I've been relying on requests' ability to auto-decompress responses for a long time now, per https://docs.python-requests.org/en/latest/community/faq/#encoded-data
However, I've hit a new problem: when urllib3 v2 or greater is installed, gzip server responses are no longer automatically decompressed. I'm currently testing with the latest urllib3, v2.0.4.
Here are the server response headers:
{'Access-Control-Allow-Origin': '*', 'Content-Encoding': 'gzip', 'Content-Length': '438882', 'Content-Type': 'application/json', 'Date': 'Thu, 28 May 2020 13:58:44 GMT', 'Server': 'Werkzeug/0.16.1 Python/3.7.4', 'Set-Cookie': '00JjTgL6zvaENlsRPoILCbSuUjTiiv7B9hLvy41-QF=; Expires=Thu, 01-Jan-1970 00:00:00 GMT; HttpOnly; Path=/, session=eyJvaWRjX2NzcmZfdG9rZW4iOiJ6QUZ0ZF9TNml5M0NqNnR3TWYyWjR2YXZKME5jM05vdyJ9.Xs_DlA.ijqNu8eoHFmkUmgn1-2HchDk-P0; HttpOnly; Path=/', 'Vary': 'Accept-Encoding, Cookie'}
However, calling response.json() blows up.
If I instead do json.loads(zlib.decompress(response.content, 16 + zlib.MAX_WBITS)), the JSON loads into Python successfully.
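For reference, here's a self-contained sketch of that manual workaround. The gzipped body below is synthetic, standing in for response.content as it arrives when auto-decompression doesn't happen:

```python
import gzip
import json
import zlib

# Synthetic stand-in for response.content: a gzip-compressed JSON body,
# as received when the client does not decompress it automatically.
raw_body = gzip.compress(json.dumps({"status": "ok"}).encode("utf-8"))

# wbits = 16 + zlib.MAX_WBITS tells zlib to expect a gzip header/trailer
# rather than a bare zlib stream.
data = json.loads(zlib.decompress(raw_body, 16 + zlib.MAX_WBITS))
print(data)  # → {'status': 'ok'}
```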
Pinning to 'urllib3<2' makes response.json() work again -- the data is automatically decompressed and loaded into Python.
The server response comes from a stored vcrpy cassette, so it is identical in both cases; the only difference is the urllib3 version.
Did something change with expected usage for urllib3 v2?