
I have many CSV files that I need to download from URLs. I found this reference: How to read a CSV file from a URL with Python?

It does almost what I want, but I don't want to go through Python to read the CSV and then have to save it again. I just want to save the CSV file from the URL directly to my hard drive.

I have no problem with for loops and cycling through my URLs; it is simply a matter of saving the CSV file.

    possible duplicate of [Downloading a picture via urllib and python](http://stackoverflow.com/questions/3042757/downloading-a-picture-via-urllib-and-python) –  Jan 21 '14 at 22:09

1 Answer

If all you want to do is save a CSV, then I wouldn't suggest using Python at all; this is really more of a Unix question. Assuming you're working on some kind of *nix system, I'd suggest just using wget. For instance:

wget http://someurl/path/to/file.csv

You can run this command from Python like so:

import subprocess

# Map each source URL to the filename it should be saved under.
save_locations = {'http://someurl/path/to/file.csv': 'test.csv'}
for url, filename in save_locations.items():
    # Passing the arguments as a list avoids shell-quoting issues,
    # and -O writes the download to the given filename.
    subprocess.run(['wget', '-O', filename, url], check=True)
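If you'd rather not depend on wget, the standard library can save the file to disk directly without parsing it. A minimal sketch using `urllib.request.urlretrieve` (the URL in the comment is the hypothetical one from above):

```python
from urllib.request import urlretrieve

def save_csv(url, filename):
    """Stream the response at `url` straight to `filename` on disk,
    without ever reading the CSV contents into Python."""
    urlretrieve(url, filename)

# Example with the hypothetical URL from the answer:
# save_csv('http://someurl/path/to/file.csv', 'test.csv')
```

This keeps the whole loop in pure Python, which also works on systems where wget isn't installed.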
Slater Victoroff