
I'm trying to build a tool for testing the delay of my internet connection, more specifically web site load times. I thought of using the Python requests module for the loading part.

Problem is, it's got no built-in functionality to measure the time it took to get the full response. For this I thought I would use the timeit module.

What I'm not sure about is that if I run timeit like so:

t = timeit.Timer("requests.get('http://www.google.com')", "import requests")

Am I really measuring the time it took the response to arrive, or is it the time it takes for the request to be built, sent, received, etc.? I'm guessing I could maybe disregard that execution time since I'm testing networks with very long delays (~700 ms)?

Is there a better way to do this programmatically?
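(For reference, the timeit pattern above can be exercised end to end with only the standard library; this is a sketch, with a throwaway local http.server standing in for a real site — with requests the call shape would be the same:)

```python
import threading
import timeit
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Tiny local server as a stand-in for a real site, so the sketch is
# self-contained and runs anywhere.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

def fetch():
    # Reading the body ensures the full response has arrived, not just
    # the headers.
    with urllib.request.urlopen(url) as resp:
        resp.read()

# repeat() runs the request several separate times; taking the minimum
# (or the mean) smooths out one-off spikes.
times = timeit.repeat(fetch, number=1, repeat=5)
print(min(times))

server.shutdown()
```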

cookM

3 Answers


There is such functionality in the latest version of requests:

https://requests.readthedocs.io/en/latest/api/?highlight=elapsed#requests.Response.elapsed

For example:

requests.get("http://127.0.0.1").elapsed.total_seconds()
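(As the comments below note, `elapsed` is a `datetime.timedelta` with microsecond resolution; a small illustration of the conversion — the value here is made up:)

```python
from datetime import timedelta

# response.elapsed is a timedelta; total_seconds() converts it to a
# float, keeping microsecond precision. The value below is a made-up
# stand-in for a real measurement.
elapsed = timedelta(milliseconds=343, microseconds=720)
print(elapsed.total_seconds())  # 0.34372
```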
Rufat
TJL
  • To get the response time in seconds: `requests.get("http://127.0.0.1").elapsed.total_seconds()` – Michael Osl Apr 04 '14 at 13:51
  • To add to Michael Osl's comment: `total_seconds()` is a decimal number which seems to have microsecond precision. – Luc May 13 '16 at 21:51
  • Do not use this for profiling and optimizing your code on the client side. It only measures the server's response time (which was the OP's question). _The amount of time elapsed between sending the request and the arrival of the response (as a timedelta). This property specifically measures the time taken between sending the first byte of the request and finishing parsing the headers. It is therefore unaffected by consuming the response content or the value of the stream keyword argument._ – the docs. – ChaimG Jun 29 '16 at 05:36
  • @GvS what about `Python 3` and `urllib3`? – Heinz Oct 16 '17 at 19:10

As for your question, it should be the total time for:

  1. Creating the request object
  2. Sending the request
  3. Receiving the response
  4. Parsing the response (see comment from Thomas Orozco)

Another way to measure a single request's load time is to use urllib:

import time
import urllib.request  # Python 3; in Python 2 this was urllib.urlopen

url = 'http://www.google.com'
nf = urllib.request.urlopen(url)
start = time.time()
page = nf.read()
end = time.time()
nf.close()
# end - start gives you the page load time
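(To smooth out jitter, the snippet above can be averaged over several runs. A sketch, using a local `file://` URL as a stand-in target so it runs anywhere, and `time.perf_counter`, the clock intended for measuring intervals:)

```python
import tempfile
import time
import urllib.request

# A local file:// URL stands in for a real site so the sketch is
# self-contained; swap in an http:// URL to measure a real network.
with tempfile.NamedTemporaryFile(suffix=".html", delete=False) as f:
    f.write(b"<html>hello</html>")
    url = "file://" + f.name

timings = []
for _ in range(5):
    start = time.perf_counter()          # start before urlopen so the
    with urllib.request.urlopen(url) as nf:  # connection time is included
        nf.read()
    timings.append(time.perf_counter() - start)

print(sum(timings) / len(timings))  # mean load time over 5 runs
```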
Bhargav Rao
pyfunc
  • + 4. parse the HTTP response – Thomas Orozco Jun 22 '12 at 16:06
  • That looks really nice, but looking at one of the examples I see `1. start_timer = time.time() 2. Open Browser + Read Response 3. latency = time.time() - start_timer`. Would that be kind of the same problem? – cookM Jun 22 '12 at 16:19
  • @cookM: I did not see it as a problem but as a real-time experience of what the request latency will be. In fact it averages over many requests, which will be closer to a realistic time. – pyfunc Jun 22 '12 at 16:23
  • @cookM: The wiki has more details on profiling load times: http://code.google.com/p/multi-mechanize/wiki/AdvancedScripts – pyfunc Jun 22 '12 at 16:24
  • @pyfunc Just saw your edit, I think that snippet is just what I was looking for. I'm not that familiar with urllib, but I'm guessing that when I issue `nf.read()` what I'm doing is sending the request and getting it back, right? – cookM Jun 22 '12 at 16:25
  • Nice! Seems like the wiki has a lot of useful information (duh). I'll give multi-mechanize a try. Thanks a lot for your help. – cookM Jun 22 '12 at 16:31
  • @cookM: Yes, when you do `nf.read()` you are doing all four of the steps mentioned above, but for a realistic load profile I would suggest you give multi-mechanize a try. It is a bit more involved than the snippet but has real returns. – pyfunc Jun 22 '12 at 16:39
  • `urlopen` seems to block until the headers come, so I'd put the start assignment before. – Janus Troelsen Mar 23 '13 at 19:00
  • I would say that this says very little about a website. A website isn't just the request response, but all the subsequent HTML and AJAX requests... What am I missing? – Aug 17 '14 at 08:56
  • Hi @pyfunc, I just read your answer. Is it possible to detail the time for each of your points? Like how much time it takes to send the request, how much time it takes for the server to process it, etc. – Gregorius Edwadr Apr 05 '17 at 07:11
  • That link seems to be infected. Takes you to a website advertising: "How, Where And Whether To Buy Instagram Followers" – agent nate May 26 '17 at 22:25

response.elapsed returns a timedelta object with the time elapsed from sending the request to the arrival of the response. It is often used to stop the connection after a certain amount of time has elapsed.

# import requests module
import requests

# make a GET request
response = requests.get('http://stackoverflow.com/')

# print response
print(response)

# print elapsed time
print(response.elapsed)

output:

<Response [200]>
0:00:00.343720
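(The usual way to "stop after a certain time" is a timeout on the request itself rather than inspecting `elapsed` afterwards. A sketch with the stdlib so it is self-contained; `requests.get` accepts the same `timeout=` keyword. 10.255.255.1 is a deliberately unreachable address, used here only for illustration:)

```python
import urllib.request

reached = False
try:
    # 10.255.255.1 is non-routable, so this either times out after
    # 0.5 s or fails fast with a connection error.
    urllib.request.urlopen("http://10.255.255.1/", timeout=0.5)
    reached = True
except OSError as exc:  # URLError and socket.timeout are both OSErrors
    print("gave up:", exc)
```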
Milovan Tomašević