I'm attempting to write a script that downloads a large list of files from a website. The download function I've got is this:
    import urllib.request

    def download(files, downloadDirectory, rootDirectory):
        for file in files:
            urllib.request.urlretrieve(rootDirectory + file, downloadDirectory + file)
How can I adapt this to download the files in parallel to maximize speed? Is multithreading the best choice, or is there something else I should use? Other questions that answer this don't seem to apply to Python 3.4.3.
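For reference, the direction I was considering is a sketch like the one below, using `concurrent.futures.ThreadPoolExecutor` from the standard library (available since Python 3.2; note that `max_workers` has no default before 3.5, so I pass it explicitly). The `fetch` parameter defaulting to `urlretrieve` is my own addition so the function can be exercised without hitting the network; I'm not sure this is the right overall approach:

    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def download_all(files, downloadDirectory, rootDirectory,
                     max_workers=8, fetch=urllib.request.urlretrieve):
        """Download each file using a pool of worker threads.

        `fetch` defaults to urlretrieve but can be swapped out for testing.
        """
        def grab(name):
            fetch(rootDirectory + name, downloadDirectory + name)
            return name

        # Downloads are I/O-bound, so threads should overlap the waiting;
        # max_workers must be given explicitly on Python < 3.5.
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            return list(pool.map(grab, files))

Is a thread pool like this reasonable for purely I/O-bound downloads, or would something like `multiprocessing` be better?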