I have a list of about 500 file URLs, each a download link, and I have written Python code to download the files to my computer. The problem is that Chrome becomes unresponsive after opening roughly 50 of them. My end goal is to upload all of the downloaded files to a bucket in S3. Is there a way to send the files to S3 directly, without going through the browser? Here is what I have written so far:
import requests
from itertools import chain
import sys
import webbrowser

url = "<my_url>"
username = "<my_username>"
password = "<my_password>"
headers = {"Content-Type": "application/xml", "Accept": "*/*"}

# Fetch the JSON listing of downloadable files
response = requests.get(url, auth=(username, password), headers=headers)
if response.status_code != 200:
    print('Status:', response.status_code, 'Headers:', response.headers, 'Error Response:', response.json())
    sys.exit()

data = response.json()

# Flatten the nested lists in the response and collect the download links
values = list(chain.from_iterable(data.values()))
links = [lis['download_link'] for lis in values]

# Open each link in the browser -- this is the part that overwhelms Chrome
for item in links:
    webbrowser.open(item)
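
For reference, it looks like the browser can be skipped entirely: each file can be fetched with requests in streaming mode and the response body handed to boto3, which can upload any file-like object straight to S3. Below is a minimal sketch, assuming boto3 is installed, AWS credentials are configured, and the download links are plain HTTP(S) GETs; the bucket name "my-bucket" and the key naming are placeholders, and the same auth tuple may need to be passed to the download requests if the links are protected.

import boto3
import requests

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder -- substitute your bucket's name

for item in links:
    # stream=True defers reading the body so the file never sits fully in memory
    with requests.get(item, stream=True) as download:
        download.raise_for_status()
        # Derive the object key from the last path segment of the URL (an assumption;
        # adjust to whatever naming scheme the bucket should use)
        key = item.rsplit("/", 1)[-1]
        # upload_fileobj reads the file-like object in chunks and switches to a
        # multipart upload for large files automatically; download.raw exposes the
        # raw byte stream, which is what you want for file downloads
        s3.upload_fileobj(download.raw, bucket, key)

Because each response is streamed from the source into S3 chunk by chunk, nothing is written to the local disk and no browser tabs pile up, so a 500-file run should complete without Chrome being involved at all.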