30

I'm writing a crawler in Python using BeautifulSoup, and everything was going swimmingly until I ran into this site:

http://www.elnorte.ec/

I'm getting the contents with the requests library:

import requests

r = requests.get('http://www.elnorte.ec/')
content = r.content

If I print the content variable at that point, all the Spanish special characters seem to be working fine. However, once I try to feed the content variable to BeautifulSoup, it all gets messed up:

soup = BeautifulSoup(content)
print(soup)
...
<a class="blogCalendarToday" href="/component/blog_calendar/?year=2011&amp;month=08&amp;day=27&amp;modid=203" title="1009 artículos en este día">
...

It's apparently garbling up all the Spanish special characters (accents and whatnot). I've tried content.decode('utf-8') and content.decode('latin-1'), and also tried messing around with the fromEncoding parameter to BeautifulSoup, setting it to fromEncoding='utf-8' and fromEncoding='latin-1', but still no dice.

Any pointers would be much appreciated.

David

5 Answers

34

In your case, this page has malformed UTF-8 data, which confuses BeautifulSoup and makes it think the page uses windows-1252. You can use this trick:

soup = BeautifulSoup(content.decode('utf-8', 'ignore'))

By doing this you discard any invalid byte sequences from the page source, and BeautifulSoup will guess the encoding correctly.

You can replace 'ignore' with 'replace' and check the text for '\ufffd' replacement characters to see what has been discarded.
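For example, a quick way to gauge what is lost (a minimal sketch; '\ufffd' is the Unicode replacement character that 'replace' substitutes for bad bytes):

cleaned = content.decode('utf-8', 'replace')
print(cleaned.count('\ufffd'), 'replacement characters')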

Actually, it's a very hard task to write a crawler that can guess the page encoding correctly every time (browsers are very good at this nowadays). You can use modules like 'chardet', but in your case, for example, it will guess the encoding as ISO-8859-2, which is not correct either.

If you really need to be able to get the encoding for any page a user could possibly supply, you should either build a multi-level detection function (try utf-8, try latin-1, and so on, like we did in our project) or use the detection code from Firefox or Chromium as a C module.
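Here is a minimal sketch of such a multi-level fallback (the candidate order, the 0.5 confidence cutoff, and the use of chardet as a middle step are my assumptions, not the original project's code):

import chardet

def decode_html(raw):
    # 1. Try strict UTF-8 first; valid UTF-8 is rarely accidental.
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        pass
    # 2. Fall back to chardet's guess if it is confident enough.
    guess = chardet.detect(raw)
    if guess['encoding'] and guess['confidence'] > 0.5:
        return raw.decode(guess['encoding'], errors='replace')
    # 3. Last resort: latin-1 maps every byte to a character, so it cannot fail.
    return raw.decode('latin-1')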

Riz
  • https://github.com/LuminosoInsight/python-ftfy is another library for cleaning up mojibake which is getting rave reviews. – tripleee Dec 15 '20 at 05:13 (a minimal ftfy sketch follows these comments)
  • If you are working with a local (in my case: html) file then this works: `soup = BeautifulSoup(open("C:\\path\\to\\your\\html\\file.html", encoding="utf8"), "html.parser")` – Sr. Schneider Jan 04 '21 at 09:15
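A minimal sketch of the ftfy approach mentioned in the comments above (assuming ftfy is installed; the sample string is my own):

import ftfy

# fix_text repairs mojibake produced by decoding UTF-8 bytes as latin-1/cp1252
print(ftfy.fix_text('1009 artÃ­culos en este dÃ­a'))
# -> '1009 artículos en este día'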
20

Could you try:

import urllib
import BeautifulSoup

r = urllib.urlopen('http://www.elnorte.ec/')
x = BeautifulSoup.BeautifulSoup(r.read())
r.close()

print x.prettify('latin-1')

I get the correct output. Oh, and in this special case you could also use x.__str__(encoding='latin1').

I guess this is because the content is in ISO-8859-1(5) and the meta http-equiv content-type incorrectly says "UTF-8".

Could you confirm?
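One quick way to confirm the mismatch (a minimal sketch using the requests library from the question; this check is my suggestion, not part of the original answer):

import requests

r = requests.get('http://www.elnorte.ec/')
print(r.encoding)           # what the HTTP headers claim
print(r.apparent_encoding)  # detected from the raw bytes (chardet/charset_normalizer)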

Gaikokujin Kun
  • Hi Gaikokujin, thanks for your answer. You're quite right, if I prettify it with the 'latin-1' parameter, I get the string back with all the right accents and all. However, I need to go through the soup to process the links, and if I try to make a soup out of the string again, it messes up the accents again. – David Aug 28 '11 at 20:10
  • Actually, never mind, now I'm getting an error when trying your suggestion: UnicodeEncodeError: 'latin-1' codec can't encode characters in position 62-63: ordinal not in range(256) – David Aug 28 '11 at 20:36
  • It seems to work again if i do: x = BeautifulSoup.BeautifulSoup(r.read(), fromEncoding='latin-1'), but again, if I try to make a new soup out of the prettified string, it messes it up again :/ – David Aug 28 '11 at 20:39
  • 2
    Finally got it, just had to: soup = BeautifulSoup(content, fromEncoding='latin-1') then when it got time to parse the links: i_title = item.contents[0].encode('latin-1').decode('utf-8') that seemed to do the trick. Thanks for your help :) – David Aug 28 '11 at 20:46
  • The code seems to be wrong (double `BeatifulSoup`?): AttributeError: type object 'BeautifulSoup' has no attribute 'BeautifulSoup' - maybe the interface changed? – S.B. Mar 30 '16 at 12:49
  • it works correctly if you print the outcome, but if you write it to a file with `file.write(str(x.prettify('latin-1')))`, it displays many escape characters like `\n` and destroys the formatting. Any workaround for that? – Harshil Doshi Oct 10 '19 at 20:52
7

You can try this, which works for every encoding:

import requests
from bs4 import BeautifulSoup
from bs4.dammit import EncodingDetector

# assumes `url` and `USERAGENT` are defined by the caller
headers = {"User-Agent": USERAGENT}
resp = requests.get(url, headers=headers)
# encoding declared in the HTTP Content-Type header, if any
http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
# encoding declared in the HTML document itself (e.g. <meta charset=...>)
html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
encoding = html_encoding or http_encoding
soup = BeautifulSoup(resp.content, 'lxml', from_encoding=encoding)
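Note the precedence here: an encoding declared inside the HTML wins over the one from the HTTP header, and if neither is present, from_encoding=None lets BeautifulSoup's own detection (UnicodeDammit) take over.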
Shawn
  • 2
    nice answer, but I would drop the `headers` (not really needed and since you didn't define `USERAGENT` the code cannot be blindly copy-pasted). – Derlin Jun 29 '18 at 09:19
5

I'd suggest taking a more methodical, foolproof approach.

import urllib
import chardet
from BeautifulSoup import BeautifulSoup

# 1. get the raw data
raw = urllib.urlopen('http://www.elnorte.ec/').read()

# 2. detect the encoding and convert to unicode
content = toUnicode(raw)    # see my caricature of toUnicode below

# 3. pass unicode to BeautifulSoup
soup = BeautifulSoup(content)


def toUnicode(s):
    if type(s) is unicode:
        return s
    elif type(s) is str:
        d = chardet.detect(s)
        (cs, conf) = (d['encoding'], d['confidence'])
        if conf > 0.80:
            try:
                return s.decode(cs, errors='replace')
            except Exception:
                pass
    # force and return only the ascii subset
    return unicode(''.join([i if ord(i) < 128 else ' ' for i in s]))

You can reason that no matter what you throw at this, it will always send valid unicode to BeautifulSoup.

As a result, your parse tree will behave much better and won't fail in new and interesting ways every time you get new data.

Trial and error doesn't work in code; there are just too many combinations :-)
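For reference, here is a minimal Python 3 adaptation of the same idea (in Python 3 the unicode type is gone, so this takes bytes and returns str; it is my sketch, not the answerer's code):

import chardet

def to_unicode(raw):
    # already decoded? pass it through
    if isinstance(raw, str):
        return raw
    guess = chardet.detect(raw)
    if guess['encoding'] and guess['confidence'] > 0.80:
        return raw.decode(guess['encoding'], errors='replace')
    # force the ascii subset, blanking non-ascii bytes, as above
    return ''.join(chr(b) if b < 128 else ' ' for b in raw)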

vpathak
2

The first answer is right; these functions are sometimes effective.

def __if_number_get_string(number):
    converted_str = number
    if isinstance(number, (int, float)):
        converted_str = str(number)
    return converted_str


def get_unicode(strOrUnicode, encoding='utf-8'):
    strOrUnicode = __if_number_get_string(strOrUnicode)
    if isinstance(strOrUnicode, unicode):
        return strOrUnicode
    return unicode(strOrUnicode, encoding, errors='ignore')


def get_string(strOrUnicode, encoding='utf-8'):
    strOrUnicode = __if_number_get_string(strOrUnicode)
    if isinstance(strOrUnicode, unicode):
        return strOrUnicode.encode(encoding)
    return strOrUnicode
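For example, reusing `r` from the question (Python 2, matching the answer's code):

content = get_unicode(r.content)   # byte string -> unicode, invalid bytes ignored
soup = BeautifulSoup(content)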
Tabares