
Good day, everyone.

This code checks whether a website is available, but it loads the whole page, so with a list of 100 websites it will be slow.

My question is: is there any way to do it faster?

import requests
user_agent = {'accept': '*/*', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'}

session = requests.Session()
response = session.get("http://google.com", headers=user_agent, timeout=5)
if response.status_code == 200:
    print("Checked & available")
else:
    print("Not available")

Thanks!

Any help will be appreciated.

Hamza Lachi
    Does this answer your question? [Checking if a website is up via Python](https://stackoverflow.com/questions/1949318/checking-if-a-website-is-up-via-python) – rrauenza Nov 30 '19 at 06:49

2 Answers


You can use this:

import urllib.request
print(urllib.request.urlopen("http://www.google.com").getcode())
# Output:
# 200
Hamza Lachi

This code checks whether a website is available, but it loads the whole page

To avoid loading the whole page, you can issue HEAD requests instead of GET requests, so you only fetch the status line and headers. See Getting HEAD content with Python Requests.
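A minimal sketch of the HEAD approach, using the standard-library urllib (requests.head(url) does the same job if you prefer requests); the helper name is_up is just an illustration:

```python
import urllib.request
import urllib.error

def is_up(url, timeout=5):
    # A HEAD request asks the server for the status and headers only,
    # so no response body is downloaded.
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status == 200
    except (urllib.error.URLError, ValueError):
        # DNS failure, connection error, HTTP error status, bad URL, ...
        return False

print(is_up("http://www.google.com"))
```

Note that some servers answer HEAD differently from GET (or reject it), so for a strict check you may still need to fall back to GET for those hosts.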

Another way to make it faster is to issue multiple requests concurrently, using multiple threads or asyncio (https://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html).
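For example, with a thread pool the 100 checks overlap instead of running one after another; this sketch combines that with HEAD requests (the URL list and the pool size of 20 are arbitrary placeholders):

```python
import concurrent.futures
import urllib.request
import urllib.error

def check(url, timeout=5):
    # HEAD request: fetch only the status, not the page body.
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return url, response.status == 200
    except (urllib.error.URLError, ValueError):
        return url, False

urls = ["http://google.com", "http://example.com"]  # your list of sites

# Threads spend most of their time waiting on the network,
# so a pool of workers checks many sites at once.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
    for url, up in executor.map(check, urls):
        print(url, "available" if up else "not available")
```

Since the work is I/O-bound, threads give a large speedup despite the GIL; asyncio with aiohttp scales to even more concurrent checks, at the cost of restructuring the code around async/await.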

warvariuc
  • Authorization headers will be removed if you get redirected off-host: google.com will be redirected, and as a result you get "Not available" – ev47295 Dec 01 '19 at 17:15