30

Looking for an easy way to get the charset/encoding information of an HTTP response using Python urllib2, or any other Python library.

>>> url = 'http://some.url.value'
>>> request = urllib2.Request(url)
>>> conn = urllib2.urlopen(request)
>>> response_encoding = ?

I know that it is sometimes present in the 'Content-Type' header, but that header has other information, and it's embedded in a string that I would need to parse. For example, the Content-Type header returned by Google is

>>> conn.headers.getheader('content-type')
'text/html; charset=utf-8'

I could work with that, but I'm not sure how consistent the format will be. I'm pretty sure it's possible for charset to be missing entirely, so I'd have to handle that edge case. Some kind of string split operation to get the 'utf-8' out of it seems like it has to be the wrong way to do this kind of thing.

>>> content_type_header = conn.headers.getheader('content-type')
>>> if '=' in content_type_header:
...     charset = content_type_header.split('=')[1]

That's the kind of code that feels like it's doing too much work. I'm also not sure if it will work in every case. Does anyone have a better way to do this?

Clay Wardell
  • 14,846
  • 13
  • 44
  • 65

6 Answers

29

To parse an HTTP header you could use cgi.parse_header():

import cgi

_, params = cgi.parse_header('text/html; charset=utf-8')
print params['charset'] # -> utf-8

Or using the response object:

import urllib2

response = urllib2.urlopen('http://example.com')
response_encoding = response.headers.getparam('charset')
# or in Python 3: response.headers.get_content_charset(default)

In general the server may lie about the encoding, may not report it at all (the default depends on the content type), or the encoding might be specified inside the response body, e.g., in a <meta> element in HTML documents or in the XML declaration for XML documents. As a last resort the encoding could be guessed from the content itself.
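
For example, a minimal fallback sketch that trusts the header first and otherwise guesses from the raw bytes (the third-party chardet package and the UTF-8 default here are just one possible choice, not a requirement):

import cgi
import urllib2
import chardet  # pip install chardet

response = urllib2.urlopen('http://example.com')
body = response.read()

# 1. use the charset declared in the Content-Type header, if any
_, params = cgi.parse_header(response.headers.getheader('content-type', ''))
encoding = params.get('charset')

# 2. otherwise guess from the raw bytes, defaulting to utf-8
if not encoding:
    encoding = chardet.detect(body)['encoding'] or 'utf-8'

text = body.decode(encoding, 'replace')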

You could use requests to get Unicode text:

import requests # pip install requests

r = requests.get(url)
unicode_str = r.text # may use `chardet` to auto-detect encoding

Or BeautifulSoup to parse html (and convert to Unicode as a side-effect):

from bs4 import BeautifulSoup # pip install beautifulsoup4

soup = BeautifulSoup(urllib2.urlopen(url)) # may use `cchardet` for speed
# ...

Or bs4.UnicodeDammit directly for arbitrary content (not necessarily HTML):

from bs4 import UnicodeDammit

dammit = UnicodeDammit(b"Sacr\xc3\xa9 bleu!")
print(dammit.unicode_markup)
# -> Sacré bleu!
print(dammit.original_encoding)
# -> utf-8
jfs
  • 399,953
  • 195
  • 994
  • 1,670
7

If you happen to be familiar with the Flask/Werkzeug web development stack, you will be happy to know the Werkzeug library has an answer for exactly this kind of HTTP header parsing, and it accounts for the case where the content type is not specified at all, as you wanted.

 >>> from werkzeug.http import parse_options_header
 >>> import requests
 >>> url = 'http://some.url.value'
 >>> resp = requests.get(url)
 >>> if resp.status_code == requests.codes.ok:
 ...     content_type_header = resp.headers.get('content-type')
 ...     print content_type_header
 text/html; charset=utf-8
 >>> parse_options_header(content_type_header)
 ('text/html', {'charset': 'utf-8'})

So then you can do:

 >>> parse_options_header(content_type_header)[1].get('charset')
 'utf-8'

Note that if charset is not supplied, this will produce instead:

 >>> parse_options_header('text/html')
 ('text/html', {})

It even works if you don't supply anything but an empty string or dict:

 >>> parse_options_header({})
 ('', {})
 >>> parse_options_header('')
 ('', {})
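
Putting it together, a short sketch (falling back to an assumed UTF-8 when the server omits the charset):

 from werkzeug.http import parse_options_header
 import requests

 resp = requests.get('http://some.url.value')
 mimetype, options = parse_options_header(resp.headers.get('content-type', ''))
 charset = options.get('charset', 'utf-8')  # assume UTF-8 when not declared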

Thus it seems to be EXACTLY what you were looking for! If you look at the source code, you will see they had your purpose in mind: https://github.com/mitsuhiko/werkzeug/blob/master/werkzeug/http.py#L320-329

def parse_options_header(value):
    """Parse a ``Content-Type`` like header into a tuple with the content
    type and the options:
    >>> parse_options_header('text/html; charset=utf8')
    ('text/html', {'charset': 'utf8'})
    This should not be used to parse ``Cache-Control`` like headers that use
    a slightly different format.  For these headers use the
    :func:`parse_dict_header` function.
    ...

Hope this helps someone some day! :)

Brian Peterson
  • 2,800
  • 6
  • 29
  • 36
5

The requests library makes this easy:

>>> import requests
>>> r = requests.get('http://some.url.value')
>>> r.encoding
'utf-8' # e.g.
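
Note that r.encoding only reflects the Content-Type header; requests also exposes apparent_encoding, which guesses the charset from the body bytes (the value shown is illustrative):

>>> r.apparent_encoding
'utf-8' # e.g.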
dnozay
  • 23,846
  • 6
  • 82
  • 104
  • 3
    Except encoding detection in requests is incorrect (meta tags are not taken in account), and they are not willing to fix that (https://github.com/kennethreitz/requests/issues/1087). – Mikhail Korobov May 17 '17 at 10:13
  • 1
    Please see my answer here https://stackoverflow.com/a/52615216/520637, you can just use `requests.Response.apparent_encoding`. – bubak Oct 02 '18 at 19:39
3

Charsets can be specified in many ways, but it's often done so in the headers.

>>> from urllib.request import urlopen  # Python 3
>>> urlopen('http://www.python.org/').info().get_content_charset()
'utf-8'
>>> urlopen('http://www.google.com/').info().get_content_charset()
'iso-8859-1'
>>> urlopen('http://www.python.com/').info().get_content_charset()
>>>

That last one didn't specify a charset anywhere, so get_content_charset() returned None.
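
If you need a value even when nothing is declared, get_content_charset() also accepts a fallback argument (UTF-8 here is just an assumed default):

>>> urlopen('http://www.python.com/').info().get_content_charset('utf-8')
'utf-8'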

Cees Timmerman
  • 17,623
  • 11
  • 91
  • 124
  • 1
    It looks only in http headers, which may lie. A `<meta>` tag inside the html document is more likely to be under the control of the person who created the document than the server's headers. Also there is no `get_content_charset()` in Python 2. [`cgi.parse_header()` works the same on Python 2 and 3](http://stackoverflow.com/a/13517891/4279). – jfs Oct 08 '14 at 16:31
  • this works great in python 3 as an initial check for the charset from the header info, you can check this first and if blank, then perform the BeautifulSoup check on the content itself. – james-see Jun 05 '15 at 17:56
2

To properly (i.e. in a browser-like way - we can't do better) decode HTML you need to take into account:

  1. Content-Type HTTP header value;
  2. BOM marks;
  3. <meta> tags in page body;
  4. Differences between encoding names used on the web and encoding names available in the Python stdlib;
  5. As a last resort, if everything else fails, guessing based on statistics is an option.

All of the above is implemented in the w3lib.encoding.html_to_unicode function: it has the signature html_to_unicode(content_type_header, html_body_str, default_encoding='utf8', auto_detect_fun=None) and returns a (detected_encoding, unicode_html_content) tuple.
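
A minimal usage sketch (requests is assumed here only for fetching; any raw bytes would do):

from w3lib.encoding import html_to_unicode  # pip install w3lib
import requests

resp = requests.get('http://some.url.value')
# Content-Type header + raw body bytes in, (detected encoding, unicode text) out
encoding, text = html_to_unicode(resp.headers.get('content-type'), resp.content)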

requests, BeautifulSoup, UnicodeDammit, chardet and werkzeug's parse_options_header are not correct solutions, as they all fail at some of these points.

Mikhail Korobov
  • 21,908
  • 8
  • 73
  • 65
  • I was looking for a solution that simply scans bytes and retrieves the encoding from meta tags. Really nice one! – evg656e Sep 17 '18 at 04:08
  • Thanks for pointing out w3lib library. Its best for my use case. Especially : w3lib.encoding.html_to_unicode – Musab Gultekin Feb 28 '23 at 13:05
0

This is what works perfectly for me. I am using Python 2.7 and 3.4:

print(text.encode('cp850', 'replace'))
Usama Tahir
  • 1,235
  • 12
  • 14