
Hello, I have this Python code, which uses the requests module:

import requests

url1 = "myurl1" # I do not remember exactly the exact url
reponse1 = requests.get(url1)
temperature1 = reponse1.json()["temperature"]

url2 = "myurl2" # I do not remember exactly the exact url
reponse2 = requests.get(url2)
temperature2 = reponse2.json()["temp"]

url3 = "myurl3" # I do not remember exactly the exact url
reponse3 = requests.get(url3)
temperature3 = reponse3.json()[0]

print(temperature1)
print(temperature2)
print(temperature3)

Actually, I have to tell you this is a little bit slow... Do you have a solution to improve the speed of my code? I thought about using multithreading, but I don't know how to use it...

Thank you very much!

Peter Carles
  • I imagine if you do some searches, there are Q&As here on SO that have examples of running web requests using multiprocessing, or threading, or asyncio, or concurrent.futures (a minimal threading sketch follows these comments). – wwii Aug 02 '19 at 18:04
  • Related: [What is the fastest way to send 100,000 HTTP requests in Python?](https://stackoverflow.com/questions/2632520/what-is-the-fastest-way-to-send-100-000-http-requests-in-python) ... [A very simple multithreading parallel URL fetching (without queue)](https://stackoverflow.com/questions/16181121/a-very-simple-multithreading-parallel-url-fetching-without-queue) ... and more. – wwii Aug 02 '19 at 18:10
  • The concurrent.futures docs even have [an example](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor-example). – wwii Aug 02 '19 at 18:15
  • [The Requests documentation](https://2.python-requests.org//en/latest/user/advanced/#blocking-or-non-blocking) points to other solutions. – wwii Aug 02 '19 at 18:19
  • Possible duplicate of [A very simple multithreading parallel URL fetching (without queue)](https://stackoverflow.com/questions/16181121/a-very-simple-multithreading-parallel-url-fetching-without-queue) – Uyghur Lives Matter Aug 02 '19 at 18:59
  • One of the common misconceptions about using e.g. requests to capture data from a website is assuming that trying to do N at once will automagically make everything N× faster. There are all sorts of factors, such as how well the server handles multiple parallel requests, that make this assumption very unlikely to be reality. – DisappointedByUnaccountableMod Aug 02 '19 at 19:24
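
In case it helps, here is a minimal sketch of the plain-threading approach those comments point to, assuming the three placeholder URLs from the question (myurl1 and friends are stand-ins for the real endpoints):

import threading
import requests

urls = ["myurl1", "myurl2", "myurl3"]  # placeholder URLs from the question
results = {}

def fetch(url):
    # Each thread performs one blocking GET and stores the parsed JSON body
    results[url] = requests.get(url).json()

threads = [threading.Thread(target=fetch, args=(url,)) for url in urls]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

print(results)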

1 Answer


Try Python's concurrent.futures executors:

import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from multiprocessing import cpu_count

urls = ["myurl1", "myurl2", "myurl3"]  # the three URLs from your question

# Submit every request up front; the pool runs them in parallel threads.
with ThreadPoolExecutor(max_workers=2 * cpu_count()) as executor:
    future_to_url = {executor.submit(requests.get, url): url for url in urls}
    # Handle each response as soon as it finishes, regardless of submission order.
    for future in as_completed(future_to_url):
        response = future.result()  # TODO: handle exceptions here
        url = future_to_url[future]
        # TODO: do something with that data
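
To tie this back to the three endpoints in the question, which return the temperature under different JSON keys, here is one possible variation of the same idea (the URL strings and per-URL extractors below are placeholders taken from the question, not a definitive implementation):

import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

# Pair each placeholder URL with the function that pulls its temperature
# out of the JSON, matching the three response shapes shown in the question.
extractors = {
    "myurl1": lambda data: data["temperature"],
    "myurl2": lambda data: data["temp"],
    "myurl3": lambda data: data[0],
}

with ThreadPoolExecutor(max_workers=len(extractors)) as executor:
    future_to_url = {executor.submit(requests.get, url): url for url in extractors}
    for future in as_completed(future_to_url):
        url = future_to_url[future]
        try:
            temperature = extractors[url](future.result().json())
        except Exception as exc:
            print(url, "failed:", exc)
        else:
            print(url, temperature)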
freakish