73

When the server's response header is 'Content-Type: text/html', requests.get() returns improperly decoded text.

However, if the content type is given explicitly as 'Content-Type: text/html; charset=utf-8', it returns properly decoded text.

Also, when we use urllib.urlopen(), we get properly decoded text.

Has anyone noticed this before? Why does requests.get() behave like this?
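
For reference, a quick way to inspect what requests decided (the URL here is just a placeholder):

import requests

r = requests.get("https://example.com/some-page")  # placeholder URL
print(r.headers.get("Content-Type"))  # e.g. 'text/html' vs. 'text/html; charset=utf-8'
print(r.encoding)                     # the encoding requests will use to decode r.text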

– arunk2

4 Answers

105

The "educated guesses" (mentioned above) are probably just a check of the Content-Type header as sent by the server (quite a misleading use of "educated", imho).

For the response header Content-Type: text/html, the result is ISO-8859-1 (the default for HTML4), without any content analysis (even though the default for HTML5 is UTF-8).

For the response header Content-Type: text/html; charset=utf-8, the result is UTF-8.
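
You can check this header-only logic directly; requests exposes it as requests.utils.get_encoding_from_headers (the header dicts below are illustrative):

from requests.utils import get_encoding_from_headers

# bare text/* content types fall back to ISO-8859-1, with no content analysis
print(get_encoding_from_headers({"content-type": "text/html"}))                  # ISO-8859-1
print(get_encoding_from_headers({"content-type": "text/html; charset=utf-8"}))   # utf-8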

Luckily for us, requests uses the chardet library, and that usually works quite well (see the requests.Response.apparent_encoding attribute), so you usually want to do:

import requests

r = requests.get("https://martin.slouf.name/")
# override the header-based encoding with the real educated guess provided by chardet
r.encoding = r.apparent_encoding
# access the data
r.text
– bubak
  • The approach with `r.encoding = r.apparent_encoding` didn't work (é showed up as Ã©) for a web page where line 13 of 374 is `<meta charset="utf-8"/>`. However, changing to `r.encoding = 'UTF-8'` worked ok. One could have code to search `r.text` for a `"Content-Type" ... charset=...` entry, then set `r.encoding` before accessing `r.text` further (a sketch of this appears after these comments). This would be clunky but more general than just setting the encoding to UTF-8. – James Waldby - jwpat7 Jan 21 '22 at 05:23
  • Well, it is a guess after all ;). I suppose you realize that `r.apparent_encoding` value is set by chardet library -- and of course -- it can guess wrong. You should also be aware that you should not access `r.text` _before_ setting the `r.encoding` to desired value (using `r.apparent_encoding` or any method desirable). I recommend reading the chardet library docs (https://chardet.readthedocs.io/en/latest/), if you are attempting to guess it your way -- it can offer a solution you seek. – bubak Jan 22 '22 at 22:05
  • ok. Note, re "should not access r.text before setting the r.encoding to desired value", some doc I looked at (and now can't find) gave impression it is ok to repeatedly set different encodings and then access .text if you want to see different encodings. ¶ But a doc looked at just now implies that's not so. ¶ Re chardet, I see it has methods that would be less ad hoc than searching for a `charset=...` entry. Thanks! – James Waldby - jwpat7 Jan 27 '22 at 02:07
  • This was a great solution for me. I was using requests and Beautiful Soup to do web scraping. At first I thought the issue was with Beautiful Soup and I was ready to dive into its documentation to figure out what it does with respect to UTF-8. Before that though, I checked the string returned with `.text` on my response object. It had the badly-encoded characters. In my case, it looked like `19% ± 3%â\x96¼` for text that should actually be `19% ± 3%▼`. `encoding` was "ISO-8859-1" and `apparent_encoding` was "UTF-8". By setting `encoding` to `apparent_encoding`, then getting `text`, it worked. – Matt Welke Apr 08 '23 at 17:31
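
A rough sketch of the charset-searching idea from the comments above (the regex and URL are illustrative; a real implementation would want proper HTML parsing):

import re
import requests

r = requests.get("https://example.com/")  # placeholder URL
# search the raw bytes (not r.text) for a charset declaration, so the result
# is not influenced by a possibly wrong default decoding
m = re.search(rb'charset=["\']?([\w-]+)', r.content[:4096], re.IGNORECASE)
if m:
    r.encoding = m.group(1).decode("ascii")
print(r.text[:200])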
59

From the requests documentation:

When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access r.text. You can find out what encoding Requests is using, and change it, using the r.encoding property.

>>> r.encoding
'utf-8'
>>> r.encoding = 'ISO-8859-1'

Check the encoding requests used for your page, and if it's not the right one, force it to be the one you need.

Regarding the difference between requests and urllib.urlopen: they probably just use different ways to guess the encoding. That's all.
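
For what it's worth, urllib (at least in Python 3) does not really guess at all: urlopen hands back raw bytes and leaves the decoding entirely to you. A minimal sketch with a placeholder URL:

from urllib.request import urlopen

with urlopen("https://example.com/") as resp:  # placeholder URL
    raw = resp.read()           # raw bytes; no decoding has happened yet
    text = raw.decode("utf-8")  # you pick the codec yourself, so nothing is guessed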

– Dekel
  • Link not working. This is the new one: https://requests.readthedocs.io/en/latest/user/quickstart/#response-content – Michael H. Aug 03 '22 at 07:40
31

After getting the response, take response.content instead of response.text, and that will give you the raw bytes (UTF-8 encoded, in this case).

response = requests.get(download_link, auth=(myUsername, myPassword), headers={'User-Agent': 'Mozilla'})
print(response.encoding)
# compare status codes with ==, not `is` (identity checks on ints are unreliable)
if response.status_code == 200:
    body = response.content
else:
    print("Unable to get response with code: %d" % response.status_code)
– Hari_pb
25

The default assumed content encoding for text/html is ISO-8859-1, aka Latin-1 :( (see RFC 2854). UTF-8 was too young to become the default; it was born in 1993, about the same time as HTML and HTTP.

Use .content to access the byte stream, or .text to access the decoded Unicode stream. If the HTTP server does not care about the correct encoding, the value of .text may be off.
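
A short illustration of the two access paths (the URL is a placeholder):

import requests

r = requests.get("https://example.com/")       # placeholder URL
raw_bytes = r.content                          # the byte stream, exactly as received
decoded = r.text                               # str, decoded with the header-based r.encoding
safer = r.content.decode(r.apparent_encoding)  # or decode yourself using chardet's guess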

– 9000
  • In my case, this was the answer. The answer given by @bubak worked, but it has bad performance for all the transformations. `content` is the key – Adolfo F. Ibarra Landeo Dec 13 '19 at 16:34
  • I was able to do something like this, to make sure that if we could not convert to what I wanted, we at least got something. I also found that the content was much faster to process than setting the encoding: `try: lContent = lResponse.content.decode('UTF-8') except: lContent = lResponse.content.decode(lResponse.apparent_encoding)` – Brian S Jan 22 '20 at 13:07
  • Using `.content` did the trick/worked for me +1 – Marco Aurelio Fernandez Reyes Aug 25 '22 at 13:57