So, I learned how web scraping works a few days ago and I was messing around with it today. I wanted to know how to test whether a page exists or doesn't exist, so I looked it up and found Python check if website exists. I'm using the requests module, and I got this code from the answers:
import requests

request = requests.get('http://www.example.com')
if request.status_code == 200:
    print('Web site exists')
else:
    print('Web site does not exist')
I tried it out, and since example.com exists, it printed "Web site exists". However, when I tried something I was sure wouldn't exist, like examplewwwwwww.com, it gave me an error instead. Why is it doing this, and how can I keep it from printing out an error (and instead have it say that the website does not exist)?
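To make it clearer what I'm after, here is a rough sketch of the behavior I want. This is just my guess: I'm assuming the error is an exception raised by requests.get that a try/except can catch, and I'm guessing at requests.exceptions.RequestException as the thing to catch, since I don't know which exception class is actually the right one:

import requests

def website_exists(url):
    # Sketch of what I'm hoping for: wrap the request in try/except
    # so a nonexistent domain doesn't crash the script.
    try:
        response = requests.get(url)
        return response.status_code == 200
    except requests.exceptions.RequestException:
        # Assumption: a domain that doesn't resolve raises some
        # exception derived from RequestException.
        return False

print('Web site exists' if website_exists('http://www.example.com')
      else 'Web site does not exist')
print('Web site exists' if website_exists('http://examplewwwwwww.com')
      else 'Web site does not exist')

If there's a more specific exception I should be catching instead, that's exactly the part I'm unsure about.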