I'd like a sanity check on this Python script. My goal is to pass in a list of URLs and get back the byte size of each response, as a rough indicator of whether each URL is good or bad.
import urllib2
import shutil

urls = (LIST OF URLS)

def getUrl(urls):
    for url in urls:
        file_name = url.replace('https://', '').replace('.', '_').replace('/', '_')
        try:
            response = urllib2.urlopen(url)
        except urllib2.HTTPError, e:
            print e.code
        except urllib2.URLError, e:
            print e.args
        print urls, len(response.read())
        with open(file_name, 'wb') as out_file:
            shutil.copyfileobj(response, out_file)

getUrl(urls)
The problem I'm having is that my output looks like:
(LIST OF URLS) 22511
(LIST OF URLS) 56472
(LIST OF URLS) 8717
...
How would I make each line show only the one URL that was fetched, along with its byte size?
Is there a better way to get these results?
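For what it's worth, I suspect the difference comes down to printing the whole `urls` list versus the single `url` loop variable. Here's a minimal, network-free sketch of the per-line output I'm after (the URLs and byte counts below are made up):

```python
# Hypothetical URLs and byte counts; no network access is made.
urls = ['https://example.com/a', 'https://example.com/b']
sizes = [22511, 56472]

# Printing the loop variable `url` gives one URL per line;
# printing the whole list `urls` would repeat the entire list each time.
lines = ['%s %d' % (url, size) for url, size in zip(urls, sizes)]
for line in lines:
    print(line)
```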