I'm looking for an easy way to get the charset/encoding of an HTTP response using Python urllib2, or any other Python library.
>>> url = 'http://some.url.value'
>>> request = urllib2.Request(url)
>>> conn = urllib2.urlopen(request)
>>> response_encoding = ?
I know that it is sometimes present in the 'Content-Type' header, but that header has other information, and it's embedded in a string that I would need to parse. For example, the Content-Type header returned by Google is
>>> conn.headers.getheader('content-type')
'text/html; charset=utf-8'
I could work with that, but I'm not sure how consistent the format will be. I'm pretty sure it's possible for charset to be missing entirely, so I'd have to handle that edge case. Some kind of string split operation to get the 'utf-8' out of it seems like it has to be the wrong way to do this kind of thing.
>>> content_type_header = conn.headers.getheader('content-type')
>>> if '=' in content_type_header:
...     charset = content_type_header.split('=')[1]
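Spelled out with the missing-charset case actually handled, the split approach ends up looking something like this (just a sketch with a hypothetical helper name and hard-coded header values, not code I'm committed to):

```python
def charset_from_content_type(content_type):
    """Pull the charset parameter out of a Content-Type value by string
    splitting. Returns None when no charset parameter is present."""
    if not content_type or 'charset=' not in content_type:
        return None
    # Take everything after 'charset=', drop any trailing parameters,
    # and strip surrounding whitespace and optional quotes.
    charset = content_type.split('charset=')[1].split(';')[0]
    return charset.strip().strip('"\'')

# The kinds of values a server might send:
print(charset_from_content_type('text/html; charset=utf-8'))          # utf-8
print(charset_from_content_type('text/html'))                         # None
print(charset_from_content_type('text/html; charset="ISO-8859-1"'))   # ISO-8859-1
```

Even this doesn't cover everything the header grammar allows, which is exactly why hand-rolling it feels wrong.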
That's the kind of code that feels like it's doing too much work. I'm also not sure if it will work in every case. Does anyone have a better way to do this?