15

How would you read an XML file using SAX and convert it to an lxml etree.iterparse element?

To provide an overview of the problem: I have built an XML ingestion tool using lxml for an XML feed that ranges in size from 25–500 MB and needs ingestion on a bi-daily basis, but it also needs to perform a one-time ingestion of a file that is 60–100 GB.

I had chosen to use lxml based on specifications stating that no single node would exceed 4–8 GB in size, which I thought would allow each node to be read into memory and cleared when finished.

An overview of the code is below:

from lxml import etree

elements = etree.iterparse(
    self._source, events=('end',)
)
for event, element in elements:
    finished = True
    if element.tag == 'Artist-Types':
        self.artist_types(element)

def artist_types(self, element):
    """
    Imports artist types

    :param etree._Element element: the Artist-Types section element
    """
    self._log.info("Importing Artist types")
    count = 0
    for child in element:
        failed = False
        fields = self._getElementFields(child, (
            ('id', 'Id'),
            ('type_code', 'Type-Code'),
            ('created_date', 'Created-Date')
        ))
        if self._type is IMPORT_INC and has_artist_type(fields['id']):
            if update_artist_type(fields['id'], fields['type_code']):
                count += 1
            else:
                failed = True
        else:
            if create_artist_type(fields['type_code'],
                fields['created_date'], fields['id']):
                count += 1
            else:
                failed = True
        if failed:
            self._log.error("Failed to import artist type %s %s" %
                (fields['id'], fields['type_code'])
            )
    self._log.info("Imported %d Artist Types Records" % count)
    self._artist_type_count = count
    self._cleanup(element)
    del element
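
Not shown in the question is the `_getElementFields` helper. For readers following along, here is a minimal sketch of what such a helper might do, assuming it collects the text of named child elements into a dict; the implementation below is a guess for illustration, not the asker's code:

def _getElementFields(self, element, mapping):
    """
    Hypothetical reconstruction: collect the text of named child
    elements into a dict keyed by field name.

    :param etree._Element element: parent record element
    :param tuple mapping: (field_name, tag_name) pairs
    """
    fields = {}
    for field_name, tag_name in mapping:
        child = element.find(tag_name)
        fields[field_name] = child.text if child is not None else None
    return fields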

Let me know if I can add any type of clarification.

Nick
  • So what is the question? Did you get an error message? – Jim Garrison Mar 21 '12 at 17:13
  • The question is in the first sentence ... why the downvote? – Nick Mar 21 '12 at 17:14
  • Your question is a bit strange. Why are you using SAX at all? iterparse is *an alternative to* SAX. You could generate iterparse events from SAX events, but why would anyone do that? – Francis Avila Mar 21 '12 at 17:15
  • From my understanding lxml does not stream the file and reads it entirely into memory (or at least the node being read). To stream it I would need to use SAX, but I have already built the entire ingestion in lxml and a conversion is out of the question. – Nick Mar 21 '12 at 17:17
  • `iterparse` does not read the entire file into memory. It builds a tree, but incrementally. Just delete nodes after you are finished processing them using `clear()`. – Francis Avila Mar 21 '12 at 18:33

3 Answers

33

iterparse is an iterative parser. It will emit Element objects and events and incrementally build the entire Element tree as it parses, so eventually it will have the whole tree in memory.

However, it is easy to have a bounded memory behavior: delete elements you don't need anymore as you parse them.

The typical "giant xml" workload is a single root element with a large number of child elements which represent records. I assume this is the kind of XML structure you are working with?

Usually it is enough to use clear() to empty out the element you are processing. Your memory usage will grow a little, but not by much. If you have a really huge file, however, even the empty Element objects will consume too much memory, and in that case you must also delete previously-seen Element objects. Note that you cannot safely delete the current element. The lxml.etree.iterparse documentation describes this technique.

In this case, you will process a record every time a </record> is found, then you will delete all previous record elements.

Below is an example using an infinitely-long XML document. It will print the process's memory usage as it parses. Note that the memory usage is stable and does not continue growing.

from lxml import etree
import resource

class InfiniteXML(object):

    def __init__(self):
        self._root = True

    def read(self, size=None):
        if self._root:
            self._root = False
            return "<?xml version='1.0' encoding='US-ASCII'?><records>\n"
        else:
            return """<record>\n\t<ancestor attribute="value">text value</ancestor>\n</record>\n"""

def parse(fp):
    context = etree.iterparse(fp, events=('end',))
    for action, elem in context:
        if elem.tag == 'record':
            # processing goes here
            pass
        
        # memory usage
        print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
        
        # cleanup
        # first, empty children from the current element
        # (not absolutely necessary if you are also deleting siblings,
        # but it allows you to free memory earlier)
        elem.clear()
        # second, delete previous siblings (records)
        while elem.getprevious() is not None:
            del elem.getparent()[0]
        # make sure you have no references to Element objects outside the loop

parse(InfiniteXML())
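
A note on the measurement (not part of the original answer): `ru_maxrss` from `resource.getrusage` is reported in kilobytes on Linux and in bytes on macOS, so the absolute numbers differ by platform. What matters for this demonstration is that the value plateaus instead of growing without bound.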
Francis Avila
  • There is no single "root" node; rather, the data is broken down into 20+ "root" nodes, each containing their own subsets. The current tool works in a somewhat similar fashion to your code in regards to the removal of unneeded nodes once processed, and this allows for processing quite a large chunk of the data, but once I attempt to process one of the larger nodes (I'm assuming larger than 8 GB in size) the process will segfault at the for loop `for action, elem in context:`, which leads me to believe that it's being read into memory. – Nick Mar 22 '12 at 02:44
  • Could you show some sample XML? The code you posted only appears to show one major element type. Iterparse is not reading the entire file into memory so it's a matter of dividing your workflow into smaller subtrees that *do* fit into memory, and deleting everything after each iteration. – Francis Avila Mar 22 '12 at 03:23
  • The code posted above is about as much as I can give, unfortunately, but with that said, after rewriting a good portion of the ingestion the import now works using your above approach. See the following snippet for the code: https://gist.github.com/2161849. – Nick Mar 22 '12 at 19:06
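
For the multi-section feed described in the comments above, here is a hedged sketch of the pattern the linked gist reportedly arrived at: dispatch on the tag of each record as its end event fires, rather than waiting for an entire multi-gigabyte section element to finish. The record tag names and handler bodies below are assumptions for illustration:

from lxml import etree

def handle_artist_type(record):
    # hypothetical: extract fields and write them to the database
    pass

# one entry per record-level tag; the real names come from the feed schema
HANDLERS = {
    'Artist-Type': handle_artist_type,
}

def ingest(source):
    context = etree.iterparse(source, events=('end',))
    for event, element in context:
        handler = HANDLERS.get(element.tag)
        if handler is not None:
            handler(element)
            # release this record and any earlier siblings so no
            # section ever accumulates fully in memory
            element.clear()
            while element.getprevious() is not None:
                del element.getparent()[0]
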
4

I found this helpful example at http://effbot.org/zone/element-iterparse.htm; the relevant section is quoted below.

Incremental Parsing

Note that iterparse still builds a tree, just like parse, but you can safely rearrange or remove parts of the tree while parsing. For example, to parse large files, you can get rid of elements as soon as you’ve processed them:

for event, elem in iterparse(source):
    if elem.tag == "record":
        ... process record elements ...
        elem.clear()

The above pattern has one drawback; it does not clear the root element, so you will end up with a single element with lots of empty child elements. If your files are huge, rather than just large, this might be a problem. To work around this, you need to get your hands on the root element. The easiest way to do this is to enable start events, and save a reference to the first element in a variable:

# get an iterable 
context = iterparse(source, events=("start", "end"))

# turn it into an iterator 
context = iter(context)

# get the root element 
event, root = context.next()

for event, elem in context:
    if event == "end" and elem.tag == "record":
        ... process record elements ...
        root.clear()

(future releases will make it easier to access the root element from within the loop)
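
A small modernization note, not part of the quoted article: the quote above targets Python 2, where iterators expose a `.next()` method. On Python 3 the same step is written with the `next()` builtin:

# Python 3 spelling of the root-grabbing step above
event, root = next(context)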

Community
  • Thanks for the answer, but I have already explored this and, at least from my testing, the node is still being read entirely into memory and is not streamed. – Nick Mar 21 '12 at 18:21
-2

This is a couple of years old and I don't have enough reputation to comment directly on the accepted answer, but I tried using it to parse an OSM file where I was finding all intersections in a country. My original issue was that I was running out of RAM, so I thought I'd have to use the SAX parser, but found this answer instead. Strangely, it wasn't parsing correctly; the suggested cleanup was somehow clearing the elem node before I had read through it (I'm still not sure how this was happening). I removed elem.clear() from the code and now it runs perfectly fine!
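
A possible explanation, offered as a guess rather than a diagnosis: `clear()` in the accepted answer runs on every end event, so any reference to an element that is held across loop iterations will point at an already-emptied node by the time it is read. Keeping the cleanup strictly after all reads of a given element avoids this. A minimal sketch, with a hypothetical `process` function standing in for the real work:

from lxml import etree

def process(elem):
    # hypothetical: pull out whatever data you need from the node here
    print(elem.tag, elem.attrib)

def parse_nodes(fp, wanted_tag):
    context = etree.iterparse(fp, events=('end',))
    for event, elem in context:
        if elem.tag == wanted_tag:
            process(elem)  # finish every read of elem first...
            elem.clear()   # ...only then is it safe to empty it
            while elem.getprevious() is not None:
                del elem.getparent()[0]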

amper