I'm trying to get some data from thousands of URLs using asyncio. Here is a brief overview of the design (a bare-bones skeleton follows the list):
- Fill up a Queue in one go with a bunch of URLs using a single Producer
- Spawn a bunch of Consumers
- Each Consumer keeps asynchronously extracting URLs from the Queue and sending GET requests
- Do some postprocessing on the result
- Combine all processed results and return
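
In bare-bones form, the pattern looks roughly like this (a runnable toy where item * 2 stands in for the real fetch-and-process work):

import asyncio

async def worker(queue, results):
    # each consumer loops forever, pulling items off the shared queue
    while True:
        item = await queue.get()
        results.append(item * 2)  # stand-in for the real postprocessing
        queue.task_done()         # tell the queue this item is fully handled

async def main(items, n_workers=3):
    queue, results = asyncio.Queue(), []
    workers = [asyncio.ensure_future(worker(queue, results))
               for _ in range(n_workers)]
    for item in items:            # the single producer fills the queue in one go
        await queue.put(item)
    await queue.join()            # unblocks once every item got a task_done()
    for w in workers:             # consumers never exit on their own
        w.cancel()
    return results

print(asyncio.get_event_loop().run_until_complete(main(range(10))))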
Problems: asyncio almost never shows that anything is wrong; it just silently hangs with no errors. I put print statements everywhere to detect problems myself, but they didn't help much. Depending on the number of input URLs and the number of consumers or connection limits, I might get these errors:
Task was destroyed but it is pending!
task exception was never retrieved future: <Task finished coro=<consumer()
aiohttp.client_exceptions.ServerDisconnectedError
aiohttp.client_exceptions.ClientOSError: [WinError 10053] An established connection was aborted by the software in your host machine
Questions: how do I detect and handle exceptions in asyncio? How do I retry without disrupting the Queue?
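
The only retry idea I've come up with so far is a catch-all inside the consumer that puts the failed item back on the queue with a retry counter. Untested sketch below; consumer_with_retry and max_tries are my own names, it reuses fetch and get_video_title from my code further down, and it assumes the producer enqueues (i, url, 0) triples instead of pairs:

async def consumer_with_retry(queue, session, responses, max_tries=3):
    while True:
        i, url, tries = await queue.get()
        try:
            html_page = await fetch(session, url, i)
            responses.append(get_video_title(html_page))
        except Exception as e:  # catch everything so the task can't die silently
            print('Error', i, type(e))
            if tries < max_tries:
                # the extra put() here is balanced by the task_done() in
                # finally, so queue.join() still unblocks at the right time
                await queue.put((i, url, tries + 1))
        finally:
            queue.task_done()  # exactly one task_done() per get()

Is this what people actually do, or does re-putting into a joined queue have some gotcha I'm missing?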
Below is my code, which I pieced together from various examples of async code. There is currently an intentional error at the end of the get_video_title function. When run, nothing shows up.
import asyncio
import aiohttp
import json
import re

import nest_asyncio
nest_asyncio.apply()  # jupyter notebook throws errors without this

user_agent = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"


def get_video_title(data):
    # pull the ytInitialPlayerResponse JSON blob out of the page source
    match = re.search(r'window\[["\']ytInitialPlayerResponse["\']\]\s*=\s*(.*)', data)
    string = match[1].strip()[:-1]
    result = json.loads(string)
    return result['videoDetails']['TEST_ERROR']  # <---- should be 'title'


async def fetch(session, url, c):
    async with session.get(url, headers={"user-agent": user_agent},
                           raise_for_status=True, timeout=60) as r:
        print('---------Fetching', c)
        if r.status != 200:
            r.raise_for_status()
        return await r.text()


async def consumer(queue, session, responses):
    while True:
        try:
            i, url = await queue.get()
            print("Fetching from a queue", i)
            html_page = await fetch(session, url, i)

            print('+++Processing', i)
            result = get_video_title(html_page)  # should raise an error here!
            responses.append(result)
            queue.task_done()
            print('+++Task Done', i)

        except (aiohttp.http_exceptions.HttpProcessingError, asyncio.TimeoutError) as e:
            print('>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>Error', i, type(e))
            await asyncio.sleep(1)
            queue.task_done()


async def produce(queue, urls):
    for i, url in enumerate(urls):
        print('Putting in a queue', i)
        await queue.put((i, url))


async def run(session, urls, consumer_num):
    queue, responses = asyncio.Queue(maxsize=2000), []

    print('[Making Consumers]')
    consumers = [asyncio.ensure_future(consumer(queue, session, responses))
                 for _ in range(consumer_num)]

    print('[Making Producer]')
    producer = await produce(queue=queue, urls=urls)

    print('[Joining queue]')
    await queue.join()

    print('[Cancelling]')
    for consumer_future in consumers:
        consumer_future.cancel()

    print('[Returning results]')
    return responses


async def main(loop, urls):
    print('Starting a Session')
    async with aiohttp.ClientSession(loop=loop, connector=aiohttp.TCPConnector(limit=300)) as session:
        print('Calling main function')
        posts = await run(session, urls, 100)
        print('Done')
        return posts


if __name__ == '__main__':
    urls = ['https://www.youtube.com/watch?v=dNQs_Bef_V8'] * 100
    loop = asyncio.get_event_loop()
    results = loop.run_until_complete(main(loop, urls))
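
For what it's worth, the closest I've come to making the swallowed exceptions visible is attaching a done-callback to every consumer task, so the exception is at least retrieved and printed instead of triggering "task exception was never retrieved". A minimal, self-contained sketch of the idea (log_task_exception and boom are my own names):

import asyncio

def log_task_exception(task):
    # done-callbacks fire on normal exit, exception, and cancellation;
    # cancelled tasks must be skipped, or .exception() itself raises
    if not task.cancelled() and task.exception() is not None:
        print('Task died with:', repr(task.exception()))

async def boom():
    raise KeyError('TEST_ERROR')  # same kind of silent death as in consumer()

async def demo():
    task = asyncio.ensure_future(boom())
    task.add_done_callback(log_task_exception)
    await asyncio.sleep(0.1)  # give the task and its callback a chance to run

asyncio.get_event_loop().run_until_complete(demo())

But that only tells me a consumer died after the fact; it doesn't put the URL back on the queue, which is why I'm also asking about retries.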