How do I avoid being blocked by Google when querying their search engine via requests? I iterate through a list of dates so that I can fetch the results of a query like Microsoft Release for each month in the list. I am currently rotating user agents and adding a time.sleep of 10 seconds between requests, but I still get blocked. How do I use proxies in conjunction with this approach? Is there a better way to do this?
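For context, this is roughly the loop I run; the user-agent strings and monthly date ranges below are stand-ins for my real lists:

import random
import time

# Stand-in values; my real lists are longer.
user_agents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36',
]
months = [('1/1/2018', '1/31/2018'), ('2/1/2018', '2/28/2018')]

for startDate, endDate in months:
    # issue the request shown below for this month's window
    time.sleep(10)  # 10-second pause between requests

Each iteration issues the request below: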
from bs4 import BeautifulSoup
import random
import requests

# Proxy endpoints (placeholder addresses here)
http_proxy = "http://10.10.1.10:3128"
https_proxy = "https://10.10.1.11:1080"
ftp_proxy = "ftp://10.10.1.10:3128"

proxyDict = {
    "http": http_proxy,
    "https": https_proxy,
    "ftp": ftp_proxy,
}

# startDate, endDate and user_agents come from the loop sketched above.
page_response = requests.get(
    'https://www.google.com/search?q=Microsoft+Release&rlz=1C1GCEA_enGB779'
    '&tbs=cdr:1,cd_min:' + startDate + ',cd_max:' + endDate +
    '&source=inms&tbm=nws&num=150',
    timeout=60,
    verify=False,
    headers={'User-Agent': random.choice(user_agents)},
    proxies=proxyDict,
)
soup = BeautifulSoup(page_response.content, 'html.parser')
I then get the following error:
ConnectTimeout: HTTPSConnectionPool(host='www.google.com', port=443): Max retries exceeded with url: /search?q=Microsoft+Release&rlz=1C1GCEA_enGB779&tbs=cdr:1,cd_min:'+startDate+',cd_max:'+endDate+'&source=inms&tbm=nws&num=150 (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x1811499358>, 'Connection to 10.10.1.11 timed out. (connect timeout=60)'))
Any idea how to resolve this error and get the requests through?
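For debugging, would a standalone check like the following be the right way to rule the proxy in or out? (I am assuming https://httpbin.org/ip as a convenient IP-echo endpoint; it returns the caller's IP, so through a working proxy it should print the proxy's address rather than mine.)

import requests

proxy = {"https": "https://10.10.1.11:1080"}  # same proxy as above

try:
    # Echoes the requesting IP; a working proxy should show its own address.
    r = requests.get('https://httpbin.org/ip', proxies=proxy, timeout=10)
    print(r.json())
except requests.exceptions.ConnectTimeout:
    print('Proxy did not respond; it is dead or unreachable.')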