22

I have an XML document like this:

<a>
    <b>hello</b>
    <b>world</b>
</a>
<x>
    <y></y>
</x>
<a>
    <b>first</b>
    <b>second</b>
    <b>third</b>
</a>

I need to iterate through all <a> and <b> tags, but I don't know how many of them are in the document, so I use XPath to handle that:

from lxml import etree

doc = etree.fromstring(xml)

atags = doc.xpath('//a')
for a in atags:
    btags = a.xpath('b')
    for b in btags:
        print b

It works, but I have pretty big files, and cProfile shows me that xpath is very expensive to use.

I wonder, is there a more efficient way to iterate through an indefinite number of XML elements?


4 Answers

27

XPath should be fast. You can reduce the number of XPath calls to one:

doc = etree.fromstring(xml)
btags = doc.xpath('//a/b')
for b in btags:
    print b.text

If that is not fast enough, you could try Liza Daly's fast_iter. This has the advantage of not requiring that the entire XML be processed with etree.fromstring first, and parent nodes are thrown away after the children have been visited. Both of these things help reduce the memory requirements. Below is a modified version of fast_iter which is more aggressive about removing other elements that are no longer needed.

import io
from lxml import etree

def fast_iter(context, func, *args, **kwargs):
    """
    fast_iter is useful if you need to free memory while iterating through a
    very large XML file.

    http://lxml.de/parsing.html#modifying-the-tree
    Based on Liza Daly's fast_iter
    http://www.ibm.com/developerworks/xml/library/x-hiperfparse/
    See also http://effbot.org/zone/element-iterparse.htm
    """
    for event, elem in context:
        func(elem, *args, **kwargs)
        # It's safe to call clear() here because no descendants will be
        # accessed
        elem.clear()
        # Also eliminate now-empty references from the root node to elem
        for ancestor in elem.xpath('ancestor-or-self::*'):
            while ancestor.getprevious() is not None:
                del ancestor.getparent()[0]
    del context

def process_element(elt):
    print(elt.text)

context = etree.iterparse(io.BytesIO(xml), events=('end',), tag='b')
fast_iter(context, process_element)
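
Since your files are big, you probably don't want to hold them in a BytesIO buffer at all; iterparse can read straight from a file. A minimal sketch, assuming the document lives in a file called big.xml (a hypothetical name):

# Read directly from a file instead of an in-memory buffer
context = etree.iterparse('big.xml', events=('end',), tag='b')
fast_iter(context, process_element)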

Liza Daly's article on parsing large XML files may prove useful reading for you too. According to the article, lxml with fast_iter can be faster than cElementTree's iterparse. (See Table 1.)

  • What is the purpose of `doc = etree.fromstring(xml)` in the fast_iter code? – John Machin Jan 14 '11 at 21:41
  • iterparse speed war: As the article states, lxml is faster IF you select one particular tag, and for general parsing (when you need to examine multiple tags), cElementTree is faster. – John Machin Jan 14 '11 at 22:04
  • This doesn't seem to be up-to-date anymore: processing a valid, well-formed 10 GB file on different systems with 8 GB of RAM causes Python 3.7.2 to crash the system somewhat after 7 GB of the file have been read. Neither this solution nor any other based on iterparse() works. At first all is fine, with a load of about 20 MB of RAM; then it stumbles and crashes the system. – meistermuh Jul 29 '19 at 15:54
13

How about iter?

>>> for tags in root.iter('b'):         # root is the ElementTree object
...     print tags.tag, tags.text
... 
b hello
b world
b first
b second
b third
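
If you also need to know which <a> each <b> belongs to, you can nest two iter() calls. A small sketch, assuming root has already been parsed as above:

# <b> elements are visited grouped under their parent <a>
for a in root.iter('a'):
    for b in a.iter('b'):
        print(b.text)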
5

Use iterparse:

import lxml.etree as ET

for event, elem in ET.iterparse(filelike_object):
    if elem.tag == "a":
        process_a(elem)
        for child in elem:
            process_child(child)
        elem.clear()  # destroy all child elements
    elif elem.tag != "b":
        elem.clear()

Note that this doesn't save all the memory, but I've been able to wade through XML streams of over a GB using this technique.

Try import xml.etree.cElementTree as ET ... it comes with Python and its iterparse is faster than the lxml.etree iterparse, according to the lxml docs:

"""For applications that require a high parser throughput of large files, and that do little to no serialization, cET is the best choice. Also for iterparse applications that extract small amounts of data or aggregate information from large XML data sets that do not fit into memory. If it comes to round-trip performance, however, lxml tends to be multiple times faster in total. So, whenever the input documents are not considerably larger than the output, lxml is the clear winner."""

-2

bs4 (BeautifulSoup) is very useful for this:

from bs4 import BeautifulSoup
raw_xml = open(source_file, 'r')
soup = BeautifulSoup(raw_xml)
soup.find_all('tags')
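
A short follow-up sketch, assuming source_file holds the XML from the question and that lxml is installed so BeautifulSoup can use its XML parser:

from bs4 import BeautifulSoup

# Parse the file with the XML parser rather than the default HTML one
with open(source_file) as raw_xml:
    soup = BeautifulSoup(raw_xml, 'xml')

for b in soup.find_all('b'):
    print(b.text)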