I have this code that downloads every file linked from a URL, but a new file is added each day. How can I avoid re-downloading the files I already have?
import os
import requests
from urllib.parse import urljoin

for link in soup.select("a[href$='v2.pdf']"):
    filename = os.path.join(folder_location, link['href'].split('/')[-1])
    with open(filename, 'wb') as f:
        f.write(requests.get(urljoin(url, link['href'])).content)
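Since a new file simply appears each day while the old ones keep the same names, one common approach is to check whether each link's target file already exists on disk and skip it if so. A minimal sketch (the helper name `new_links` is my own invention, not part of your code; it assumes filenames on the server don't change once published):

```python
import os

def new_links(hrefs, folder_location):
    """Return (local_filename, href) pairs for links not yet saved locally."""
    pending = []
    for href in hrefs:
        # Same naming scheme as your loop: last path segment of the href
        filename = os.path.join(folder_location, href.split('/')[-1])
        if not os.path.exists(filename):  # skip files fetched on earlier runs
            pending.append((filename, href))
    return pending
```

Your existing loop then only touches the new entries, e.g. `hrefs = [link['href'] for link in soup.select("a[href$='v2.pdf']")]` followed by `for filename, href in new_links(hrefs, folder_location): ...` with the same `requests.get(urljoin(url, href))` body. Note this only works if old files are never silently replaced under the same name; if they can change, you would need to compare something like `Content-Length` or `Last-Modified` headers instead.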