
We are trying to choose between technologies at my work, so I thought I'd run a benchmark using both libraries (aiohttp and requests).

I want it to be as fair and unbiased as possible, and would love the community to take a look at this.

So this is my current code:

import asyncio as aio
import aiohttp
import requests
import time

TEST_URL = "https://a-domain-i-can-use.tld"

def requests_fetch_url(url):
    with requests.Session() as session:
        with session.get(url) as resp:
            html = resp.text

async def aio_fetch_url(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            html = await resp.text()

t_start_1 = time.time()
for i in range(10):
    [requests_fetch_url(TEST_URL) for i in range(16)]
t_end_1 = time.time()
print("using requests : %.2fs" % (t_end_1-t_start_1))

t_start_2 = time.time()
for i in range(10):
    aio.get_event_loop().run_until_complete(aio.gather(
        *[aio_fetch_url(TEST_URL) for i in range(16)]
    ))
t_end_2 = time.time()
print("using aiohttp : %.2fs" % (t_end_2-t_start_2))

ratio = (t_end_1-t_start_1)/(t_end_2-t_start_2)
print("ratio : %.2f" % ratio)
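One asymmetry I'm aware of: the requests list comprehension runs its 16 calls one after another, while `asyncio.gather` runs them concurrently. A thread pool would give the requests side matched concurrency. A sketch (here `fetch` is a hypothetical no-network stand-in for `requests_fetch_url`):

```python
from concurrent.futures import ThreadPoolExecutor

TEST_URL = "https://a-domain-i-can-use.tld"

def fetch(url):
    # hypothetical stand-in for requests_fetch_url(url);
    # swap the real HTTP call back in for an actual benchmark
    return len(url)

# mirror aiohttp's 16-way concurrency on the requests side
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(fetch, [TEST_URL] * 16))
```

With the real fetch swapped in, this measures threaded blocking I/O against the event loop, which may be the fairer comparison.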

So, is this biased? Are there ways to make it more reliable? Should I also monitor CPU and/or RAM usage? Anything else I'm missing?
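For reliability, one option I'm considering is wrapping each batch in `timeit.repeat` instead of a single `time.time()` pair. A sketch, with a hypothetical no-network stand-in for the fetch:

```python
import timeit

def fetch():
    # hypothetical no-network stand-in; replace with requests_fetch_url(TEST_URL)
    sum(range(1000))

# run the 16-call batch 10 times, and repeat the whole measurement 5 times;
# min() is conventionally the least noise-affected figure
times = timeit.repeat(lambda: [fetch() for _ in range(16)], number=10, repeat=5)
best = min(times)
```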

Loïc
  • You should try `timeit` to be more reliable in your tests. It will automate most of the time calculation. – TwistedSim Apr 25 '18 at 20:40
  • Thank you for your input @TwistedSim, I'll give it a look. – Loïc Apr 25 '18 at 20:43
  • Your question is good, but I think it would be better if you posted it on https://codereview.stackexchange.com/ — correct me if I'm wrong. – Adarsh Patel Mar 27 '19 at 13:34
  • According to [the docs](https://docs.aiohttp.org/en/latest/http_request_lifecycle.html), reopening a session on every request is not a good pattern for either aiohttp or requests. – bluenote10 Feb 07 '21 at 10:11
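Following bluenote10's comment, both sides of the benchmark could reuse a single session across all 16 requests so connection pooling is exercised. A sketch of what that might look like (hypothetical helper names, not from the original post):

```python
import asyncio
import aiohttp
import requests

def requests_fetch_all(url, n=16):
    # one Session reused for all n requests, so TCP connections are pooled
    with requests.Session() as session:
        for _ in range(n):
            with session.get(url) as resp:
                html = resp.text

async def aio_fetch_all(url, n=16):
    # one ClientSession shared by all n concurrent requests
    async with aiohttp.ClientSession() as session:
        async def one():
            async with session.get(url) as resp:
                return await resp.text()
        await asyncio.gather(*(one() for _ in range(n)))
```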

0 Answers