
Basically I am trying to write a script which will grab certain files on a webpage and download them to specific folders.

I am able to do this for most webpages using Python, Selenium, and Firefox preferences.

However, when I try to grab files off of this one specific webpage, I can't parse the HTML because of credential restrictions.

Here is the problem: I am able to grab the download link for the file, and I can open a browser and get the open/save dialog to pop up. However, I can't click it or actually download the file from there. I have already set the Firefox preferences to not show this dialog, to download automatically, and to save to a specific folder. These preferences are ignored for some reason, and I am still left staring at the open browser with the open/save dialog.

How do I use the download link of a file to download it to a specific folder using Python, Selenium, or any other related trick? I don't want to build a bot that clicks Save for me; that is too "hacky", and this is a company project.
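For reference, the preferences I'm setting look roughly like this (the download folder and MIME type below are placeholders for the real ones):

from selenium import webdriver

# Tell Firefox to save files of this MIME type to a fixed folder
# without showing the open/save dialog.
profile = webdriver.FirefoxProfile()
profile.set_preference("browser.download.folderList", 2)         # 2 = use a custom download directory
profile.set_preference("browser.download.dir", r"C:\downloads")  # placeholder target folder
profile.set_preference("browser.helperApps.neverAsk.saveToDisk",
                       "application/octet-stream")               # placeholder; must match the file's Content-Type
driver = webdriver.Firefox(firefox_profile=profile)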

Thanks!

Josh

2 Answers


You can try `urllib`:

urllib.urlretrieve(<url>, <filename_with_path>)
Murali
  • I would, but with `urlretrieve` I need the permissions for that page, which Python doesn't send. – Josh Aug 12 '15 at 14:21
  • `urllib.urlretrieve` only works when no password is required. If you need authentication to download the file, you might want to give the `mechanize` module a try: you can replicate the steps you do on the website, like opening the URL, logging in, and navigating (see the sketch below these comments). – Murali Aug 12 '15 at 17:59
  • In Python 3, you have to import `urllib.request` and use `urllib.request.urlretrieve`. – Nathan Dai Mar 19 '23 at 06:16
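A minimal sketch of that `mechanize` approach (the login URL, form index, field names, and paths are assumptions; adjust them to the real page):

import mechanize

# Hypothetical login URL and form field names -- replace with the real ones.
br = mechanize.Browser()
br.set_handle_robots(False)   # don't refuse pages disallowed by robots.txt
br.open("https://example.com/login")
br.select_form(nr=0)          # select the first form on the login page
br["username"] = "user"       # hypothetical field names and credentials
br["password"] = "secret"
br.submit()

# The Browser object keeps the session cookies, so this request is authenticated.
response = br.open("https://example.com/path/to/file.gz")
with open(r"C:\downloads\file.gz", "wb") as f:
    f.write(response.read())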
import urllib

# Python 2 only: URLopener.retrieve saves the remote file under the given local name.
testfile = urllib.URLopener()
testfile.retrieve("http://randomsite.com/file.gz", "file.gz")

This is a good way to download a file with Python. Refer here.
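Note that this snippet is Python 2; as mentioned in the comments above, a rough Python 3 equivalent (assuming no authentication is needed) is:

import urllib.request

# Python 3: urlretrieve fetches the URL and writes it to the given local path.
urllib.request.urlretrieve("http://randomsite.com/file.gz", "file.gz")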

hemnath mouli