
I am trying to read from a streaming API where the data is sent using chunked transfer encoding. There can be more than one record per chunk; each record is terminated by a CRLF, and the data is always sent gzip-compressed. I am trying to get the feed and process it one record at a time. I have gone through a bunch of Stack Overflow resources but couldn't find a way to do it in Python. In my case, iter_content(chunk_size) is throwing an exception on this line:

for chunk in api_response.iter_content(chunk_size=1024): 

In Fiddler (which I am using as a proxy) I can see that data is constantly being downloaded, and by doing a "COMETPeek" in Fiddler I can actually see some sample JSON.

Even iter_lines does not work. I have looked at the asyncio and aiohttp case mentioned here: Why doesn't requests.get() return? What is the default timeout that requests.get() uses?

but I am not sure how to do the processing. As you can see, I have tried using a bunch of Python libraries. Sorry, some of the code might still reference libraries that I later removed because they didn't work out.

I have also looked at the documentation for requests library but couldn't find anything substantial.

As mentioned above, below is a sample of what I am trying to do. Any pointers on how I should proceed would be highly appreciated.

This is the first time I am trying to read a stream.

from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session
import requests
import zlib
import json

READ_BLOCK_SIZE = 1024*8

clientID="ClientID"
clientSecret="ClientSecret"

proxies = {
"https": "http://127.0.0.1:8888",
}

client = BackendApplicationClient(client_id=clientID)
oauth = OAuth2Session(client=client)

token = oauth.fetch_token(token_url='https://baseTokenURL/token', client_id=clientID,client_secret=clientSecret,proxies=proxies,verify=False) 

auth_t=token['access_token']
#auth_t = accesstoken.encode("ascii", "ignore")

headers = {
'authorization': "Bearer " + auth_t,
'content-type': "application/json",
'Accept-Encoding': "gzip",
}
dec=zlib.decompressobj(32 + zlib.MAX_WBITS)

try:
    init_res = requests.get('https://BaseStreamURL/api/1/stream/specificStream', headers=headers, allow_redirects=False,proxies=proxies,verify=False)
    if init_res.status_code == 302:
        print(init_res.headers['Location'])
        api_response = requests.get(init_res.headers['Location'], headers=headers, allow_redirects=False,proxies=proxies,verify=False, timeout=20, stream=True,params={"smoothing":"1", "smoothingBucketSize" : "180"})
        if  api_response.status_code == 200:
            #api_response.raw.decode_content = True

            #print(api_response.raw.read(20))
            for chunk in api_response.iter_content(chunk_size=1024):
                pass  # parse the response here
    elif init_res.status_code == 200:
        print(init_res.content)
except Exception as ce:
    print(ce)
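For what it's worth, here is a minimal, self-contained sketch of the incremental gzip decompression plus CRLF record splitting I am after. iter_records is my own made-up name; in the real code the chunks would come from api_response.iter_content(chunk_size=1024):

```python
import zlib

def iter_records(compressed_chunks):
    """Incrementally decompress a gzip stream and yield CRLF-separated records."""
    # 32 + MAX_WBITS tells zlib to expect a gzip header.
    dec = zlib.decompressobj(32 + zlib.MAX_WBITS)
    buf = b""
    for chunk in compressed_chunks:
        buf += dec.decompress(chunk)
        # Everything before the last CRLF is complete; the rest stays buffered.
        *complete, buf = buf.split(b"\r\n")
        for record in complete:
            if record:
                yield record
    # Flush whatever remains once the stream ends.
    buf += dec.flush()
    if buf:
        yield buf
```

The idea would then be: for record in iter_records(api_response.iter_content(chunk_size=1024)): json.loads(record), assuming the server really does send gzip with CRLF-delimited records.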

UPDATE I am looking at this now: https://aiohttp.readthedocs.io/en/v0.20.0/client.html

Would that be the way to go?

Saugat Mukherjee

2 Answers


Just in case someone finds this useful: I have found a way to stream from the API through Python using aiohttp. Below is the skeleton. Remember, it is just a skeleton, and it works by continuously showing me results. If someone has a better way of doing it, I am all ears and eyes, since this is the first time I am trying to catch a stream.

import asyncio
import aiohttp
import async_timeout

async def fetch(session, url, headers):
    # No timeout: the stream is expected to stay open indefinitely.
    with async_timeout.timeout(None):
        async with session.get(url, headers=headers, proxy="http://127.0.0.1:8888", allow_redirects=False, timeout=None) as r:
            while True:
                chunk = await r.content.read(1024*3)
                if not chunk:
                    break
                print(chunk)

async def main(url, headers):
    async with aiohttp.ClientSession() as session:
        await fetch(session, url, headers)

In the caller:

try:
    init_res = requests.get('https://BaseStreamURL/api/1/stream/specificStream', headers=headers, allow_redirects=False,proxies=proxies,verify=False)
    if init_res.status_code == 302:
        loc=init_res.headers['Location']
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main(loc, headers=headers))
    elif init_res.status_code == 200:
        print(init_res.content)
except Exception as ce:
    print(ce)
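One caveat with the raw read(1024*3) above: it can split a record across two chunks. A small reassembly helper over any async stream of byte chunks deals with that (records_from_stream is just my own illustrative name, not an aiohttp API; in fetch you would wrap it around r.content.iter_chunked(1024*3)):

```python
async def records_from_stream(chunks, delimiter=b"\r\n"):
    """Reassemble delimiter-separated records from an async iterable of byte chunks."""
    buf = b""
    async for chunk in chunks:
        buf += chunk
        # Everything before the last delimiter is complete; the rest stays buffered.
        *complete, buf = buf.split(delimiter)
        for record in complete:
            if record:
                yield record
    # Whatever remains when the stream closes is the final record.
    if buf:
        yield buf
```

Inside fetch, the while loop would then become: async for record in records_from_stream(r.content.iter_chunked(1024*3)): print(record).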

I achieved the above by following bits and pieces from Stack Overflow answers. Below is what worked for me.

import requests

MAX_REDIRECTS = 1000

def get_data(url, **kwargs):
    kwargs.setdefault('allow_redirects', False)
    for i in range(MAX_REDIRECTS):
        response = requests.get(url, **kwargs)
        # check the response code to see whether a redirect happened
        if response.status_code == requests.codes.moved or \
           response.status_code == requests.codes.found:
            if 'Location' in response.headers:
                url = response.headers['Location']
                continue
            else:
                print("problem reading")
        return response

Then change the call in your code from

init_res = requests.get('https://BaseStreamURL/api/1/stream/specificStream', headers=headers, allow_redirects=False,proxies=proxies,verify=False)

to

init_res = get_data('https://BaseStreamURL/api/1/stream/specificStream',stream=True, headers=headers,params=payload)
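The redirect loop in get_data can be exercised without any network access by injecting the getter as a parameter. This sketch shows the same logic; follow_redirects and the injected getter callable are illustrative names of mine, not part of requests:

```python
MAX_REDIRECTS = 1000

def follow_redirects(getter, url, max_redirects=MAX_REDIRECTS):
    """Follow 301/302 hops by hand; `getter` stands in for requests.get."""
    for _ in range(max_redirects):
        response = getter(url)
        # A redirect only counts if the server actually supplied a Location.
        if response.status_code in (301, 302) and "Location" in response.headers:
            url = response.headers["Location"]
            continue
        return response
    raise RuntimeError("too many redirects")
```

With getter=requests.get (plus your headers/params via functools.partial) this behaves like get_data, and any stub callable can stand in for it in a test.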
Dharman
kumarm