
I've been fighting with this for an hour now. I'm parsing an XML string with iterparse. However, the data is not encoded properly, and I am not the provider of it, so I can't fix the encoding.

Here's the error I get:

lxml.etree.XMLSyntaxError: line 8167: Input is not proper UTF-8, indicate encoding !
Bytes: 0xEA 0x76 0x65 0x73

How can I simply ignore this error and continue parsing? I don't mind if one character isn't saved properly; I just need the data.

Here's what I've tried, all picked from the internet:

data = data.encode('UTF-8','ignore')
data = unicode(data,errors='ignore')
data = unicode(data.strip(codecs.BOM_UTF8), 'utf-8', errors='ignore')

Edit:
I can't show the URL, as it's a private API and involves my API key, but this is how I obtain the data:

ur = urlopen(url)
data = ur.read()

The character that causes the problem is å. I guess ä, ö, etc. would also break it.

Here's the part where I try to parse it:

def fast_iter(context, func):
    for event, elem in context:
        func(elem)
        elem.clear()
        while elem.getprevious() is not None:
            del elem.getparent()[0]
    del context

def process_element(elem):
    print elem.xpath('title/text()')

context = etree.iterparse(StringIO(data), tag='item')
fast_iter(context, process_element)

Edit 2:
This is what happens when I try to parse it in PHP. Just to clarify, F***ing Åmål is a drama movie =D

The file starts with <?xml version="1.0" encoding="UTF-8" ?>

Here's what I get from print repr(data[offset-10:offset+60]):

ence des r\xeaves, La</title>\n\t\t<year>2006</year>\n\t\t<imdb>0354899</imdb>\n
Martti Laine
  • does `data.decode('ISO-8859-2')` work? – jfs Feb 11 '12 at 22:12
  • @J.F.Sebastian May be weird, but this happens: `lxml.etree.XMLSyntaxError: Document is empty, line 1, column 1` – Martti Laine Feb 12 '12 at 09:04
  • if you do `somefile.read()` twice then the second `read()` returns an empty string, [example](http://ideone.com/ARFFS). If the file starts with `<?xml version="1.0" encoding="UTF-8" ?>` and you do `data.decode(encoding)` then you should get `ValueError: Unicode strings with encoding declaration are not supported.` – jfs Feb 12 '12 at 11:04
  • @MarttiLaine: `iso-8859-2` is a red herring; see my updated answer. – John Machin Feb 12 '12 at 11:56

5 Answers


You say:

The character that causes the problem is: å,

How do you know that? What are you viewing your text with?

So you can't publish the URL and your API key; what about reading the data, writing it to a file (in binary mode), and publishing that?

When you open that file in your web browser, what encoding does it detect?

At the very least, do this

data.decode('utf8') # where data is what you get from ur.read()

This will produce an exception that will tell you the byte offset of the non-UTF-8 stuff.

Then do this:

print repr(data[offset-10:offset+60])

and show us the results.
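
A minimal way to combine those two steps (a sketch, assuming data holds the raw bytes returned by ur.read()):

try:
    data.decode('utf8')
except UnicodeDecodeError as exc:
    offset = exc.start  # byte offset of the first non-UTF-8 byte
    print repr(data[offset-10:offset+60])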

Assuming the encoding is actually cp1252 and decoding the bytes in the lxml error message:

>>> guff = "\xEA\x76\x65\x73"
>>> from unicodedata import name
>>> [name(c) for c in guff.decode('1252')]
['LATIN SMALL LETTER E WITH CIRCUMFLEX', 'LATIN SMALL LETTER V', 'LATIN SMALL LETTER E', 'LATIN SMALL LETTER S']
>>>

So are you seeing e-circumflex followed by ves, or a-ring followed by ves, or a-ring followed by something else?

Does the data start with an XML declaration like <?xml version="1.0" encoding="UTF-8"?>? If not, what does it start with?

Clues for encoding guessing/confirmation: What language is the text written in? What country?

UPDATE based on further information supplied.

Based on the snippet that you showed in the vicinity of the error, the movie title is "La science des rêves" (the science of dreams).

Funny how PHP gags on "F***ing Åmål" but Python chokes on French dreams. Are you sure that you did the same query?

You should have told us it was IMDB up front; you would have got your answer much sooner.

SOLUTION: before you pass data to the lxml parser, do this:

data = data.replace('encoding="UTF-8"', 'encoding="iso-8859-1"')

That's based on the encoding that they declare on their website, but that may be a lie too. In that case, try cp1252 instead. It's definitely not iso-8859-2.
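
Put together with the parsing code from the question, the whole flow would look something like this (a sketch; it assumes the declaration appears exactly once near the top, and 'iso-8859-1' may need to be swapped for 'windows-1252'):

ur = urlopen(url)
data = ur.read()
# patch the (false) declared encoding before handing the bytes to lxml
data = data.replace('encoding="UTF-8"', 'encoding="iso-8859-1"', 1)
context = etree.iterparse(StringIO(data), tag='item')
fast_iter(context, process_element)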

John Machin

iterparse allows you to override the encoding declared in the document using its keyword argument encoding (see https://lxml.de/api/lxml.etree.iterparse-class.html). In your code above, you could also write

context = etree.iterparse(StringIO(data), tag='item', encoding='iso-8859-1') 

to deal with the Western European characters in the file.
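
Since iterparse accepts any file-like object, the urlopen response from the question could even be streamed directly, without reading it into a string first (a sketch; 'iso-8859-1' is an assumption about the real encoding):

ur = urlopen(url)
context = etree.iterparse(ur, tag='item', encoding='iso-8859-1')
fast_iter(context, process_element)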

Marc Steffen

However, the data is not encoded properly, and I am not the provider of it, so I can't fix the encoding.

It is encoded somehow. Determine the actual encoding and specify that encoding instead of UTF-8 (since that is obviously not the encoding).
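
For example, if the real encoding turned out to be Latin-1 (an assumption; cp1252 is another common candidate), you could decode with that codec and re-encode as genuine UTF-8, so that the document's encoding="UTF-8" declaration becomes true:

text = data.decode('iso-8859-1')  # decode with the actual encoding (assumed here)
data = text.encode('utf-8')       # re-encode so the declared UTF-8 is now correct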

Karl Knechtel
  • I am trying to make it UTF-8, shouldn't I be able to convert any string into UTF-8? Or do I have to know the original encoding to do that? PHP's `mb_detect_encoding()` gives me `UTF-8`. – Martti Laine Feb 11 '12 at 21:38
  • UTF-8 isn't "something you convert the string into"; it's a possibility for how data is encoded. You're receiving **bytes** and transforming them into a **unicode string**, which is a higher-level concept that can't really exist as such on disk (or be transmitted as such over the network). Try showing more of the code, like the part where you actually retrieve the data and attempt to work with it. Try letting us know where the data comes from so we can inspect it. – Karl Knechtel Feb 11 '12 at 21:40
  • @MarttiLaine: If PHP says that, it is believing the lie in the XML declaration at the start of the XML stream. – John Machin Feb 12 '12 at 11:46

You can decode with errors='replace'; that way each bad byte is replaced with the Unicode replacement character:

>>> unicode('\x80abc', errors='replace')
u'\ufffdabc'

WeaselFox

To recover from errors during parsing you could use the recover option (some data might be ignored in this case):

import urllib2
from lxml import etree

data = urllib2.urlopen(URL).read()
root = etree.fromstring(data, parser=etree.XMLParser(recover=True))
for item in root.iter('item'):
    process_element(item)  # process each <item> here, e.g. with the question's process_element

To override the document encoding you could use:

parser=etree.XMLParser(encoding=ENCODING)
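
The two options can also be combined (a sketch; 'iso-8859-1' is an assumption about the real encoding):

parser = etree.XMLParser(encoding='iso-8859-1', recover=True)
root = etree.fromstring(data, parser=parser)
for item in root.iter('item'):
    print item.xpath('title/text()')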

Here's how feedparser detects character encoding (it is not trivial).

jfs
  • The OP mentions the use of `iterparse`. Your answer requires the whole XML to be loaded into memory and may not work for large XML files. – Dexter Mar 16 '13 at 18:11
  • @mcenley: thank you for the comment. It is likely that an XML document available via the web fits in memory. `XMLParser(encoding=ENCODING)` is a valid (and more robust) alternative to manually manipulating the XML declaration with string substitutions. And `recover=True` allows recovering part of a corrupted document if there is more than one character encoding, e.g., [Microsoft smart quotes in an otherwise UTF-8 document](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#inconsistent-encodings). – jfs Mar 17 '13 at 00:09