
Hello, I am using the requests module and I would like to improve the speed, because I have many URLs, so I suppose I can use threading to make it faster. Here is my code:

import requests

urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]
for url in urls:
    response = requests.get(url)
    value = response.json()

But I don't know how to use requests with threading.

Could you help me, please?

Thank you!

Paul Harris

2 Answers


Just to add to bashrc's answer: you can also use concurrent.futures directly with requests; you don't need to go through urllib.request.

It would be something like this:

import requests
from concurrent import futures

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

with futures.ThreadPoolExecutor(max_workers=5) as executor:  ## increase max_workers to create more threads
    res = executor.map(requests.get, URLS)
responses = list(res)  ## map returns a generator, so turn it into a list to collect the responses
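One small follow-up: executor.map returns the results in the same order as the input, so (assuming every request succeeded) you can pair each URL with its response, for example to check the status codes:

for url, response in zip(URLS, responses):
    print(url, response.status_code)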

What I like to do, however, is to create a function that directly returns the JSON from the response (or the text, if you want to scrape), and use that function in the thread pool:

import requests
from concurrent import futures

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def getData(url):
    res = requests.get(url)
    try:
        return res.json()  ## parse the body as JSON if possible
    except ValueError:
        return res.text  ## fall back to the raw text (e.g. for HTML pages)

with futures.ThreadPoolExecutor(max_workers=5) as executor:
    res = executor.map(getData, URLS)
responses = list(res)  ## the list already contains the parsed JSON (or text) for each URL
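Note that a URL that cannot be reached at all (like the made-up domain above) makes requests.get raise, and that exception resurfaces when you consume the results. If you would rather keep the whole batch alive, one option is to catch request errors inside the worker function; a minimal sketch, assuming you are fine with getting None back for failed URLs (getDataSafe is just an illustrative name):

def getDataSafe(url):
    try:
        res = requests.get(url, timeout=10)  ## a timeout avoids threads hanging forever
    except requests.exceptions.RequestException:
        return None  ## the request itself failed (DNS error, timeout, ...)
    try:
        return res.json()
    except ValueError:
        return res.text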

Pitchkrak

You can use the concurrent.futures module.

import concurrent.futures
import requests
pool = concurrent.futures.ThreadPoolExecutor(max_workers=DEFAULT_NUMBER_OF_THREADS)
responses = list(pool.map(requests.get, urls))  ## consume the generator to actually run the requests

This gives you controlled concurrency: max_workers caps how many requests run at the same time.

Here is a direct example from the ThreadPoolExecutor documentation:

import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
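Since the question uses requests rather than urllib.request, the same submit/as_completed pattern works if you swap load_url for a small requests-based helper; a minimal sketch (fetch is my own illustrative name, not part of the documentation example):

import requests

# Fetch a single page with requests instead of urllib.request
def fetch(url, timeout):
    response = requests.get(url, timeout=timeout)
    return response.text  ## or response.json() if the URL returns JSON

# then, inside the with block above:
#     future_to_url = {executor.submit(fetch, url, 60): url for url in URLS}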
bashrc