This Python script makes GET requests to URLs loaded from a websites.txt file. It then checks each response for a "KEYWORD"; if the keyword is found, the URL is saved to "WorkingSites.txt".
Everything works, but it is too slow because it checks only one URL at a time. What is the best and easiest way to check, for example, 10 URLs at the same time?
Can you please provide an example based on my script below?
Thanks
import requests
import sys

if len(sys.argv) != 2:
    print "\n\033[34;1m[*]\033[0m python " + sys.argv[0] + ' websites.txt '
    exit(0)

targetfile = open(sys.argv[1], 'r')
success = open('WorkingSites.txt', 'a')  # open the output file once, not on every iteration
text = 'KEYWORD'                         # keyword to look for in each response

while True:
    host = targetfile.readline().replace('\n', '')
    if not host:
        break
    if not host.startswith('http'):
        host = 'http://' + host
    print '\033[34;1m[*]\033[0m Check : ' + host
    try:
        r = requests.get(host, timeout=5,
                         headers={'Content-Type': 'application/x-www-form-urlencoded',
                                  'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3163.100 Safari/537.36'})
    except requests.RequestException:
        print '\033[31;1m[-]\033[0m Failed : No Response\n'
        continue
    if text in r.text:
        print '\033[32;1m[+]\033[0m success : ' + host + '\n'
        success.write(host + '\n')
    else:
        print '\033[31;1m[-]\033[0m Failed : ' + host + '\n'

success.close()
print "\033[34;1m[*]\033[0m Output Saved On : WorkingSites.txt"