You simply need to supply a destination filename to urlretrieve. However, pool.map doesn't support functions that take multiple arguments (see Python multiprocessing pool.map for multiple arguments). So you can either refactor as described there, or use a different multiprocessing primitive, e.g. Process:
from multiprocessing import Process
from urllib.request import urlretrieve

urls = ["link", "otherlink"]
filenames = ["{}.html".format(i) for i in urls]

if __name__ == "__main__":  # needed on platforms that spawn worker processes
    processes = []
    for arg in zip(urls, filenames):
        # the callable goes in target=, its (url, filename) tuple in args=
        p = Process(target=urlretrieve, args=arg)
        p.start()
        processes.append(p)
    for p in processes:
        p.join()  # wait for every download to finish
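Alternatively, if you'd rather keep a pool, Pool.starmap (Python 3.3+) unpacks each argument tuple into the function call for you; a minimal sketch, reusing the same placeholder urls and filenames:

from multiprocessing import Pool
from urllib.request import urlretrieve

urls = ["link", "otherlink"]
filenames = ["{}.html".format(i) for i in urls]

if __name__ == "__main__":
    with Pool() as pool:
        # each (url, filename) pair is unpacked into urlretrieve(url, filename)
        pool.starmap(urlretrieve, zip(urls, filenames))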
In the comments you say you only need to download one URL. In that case it's very easy:
from urllib.request import urlretrieve
urlretrieve("https://yahoo.com", "where_to_save.html")
The file will then be saved as where_to_save.html. You can of course provide a full path instead, e.g. /where/exactly/to/save.html.
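Note that urlretrieve also returns the local filename and the response headers, which is handy for confirming where the file ended up; a quick sketch:

from urllib.request import urlretrieve

# returns the local path and an http.client.HTTPMessage of response headers
path, headers = urlretrieve("https://yahoo.com", "where_to_save.html")
print(path)                         # where_to_save.html
print(headers.get("Content-Type"))  # e.g. text/html; charset=utf-8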