
The code below parses JSON from a URL to retrieve 10 URLs and writes them to an output.txt file.

import json
import urllib.request

response = urllib.request.urlopen('https://json-test.com/test').read()
jsonResponse = json.loads(response.decode('utf-8'))
with open("C:\\Users\\test\\Desktop\\test\\output.txt", "a") as out:
    for child in jsonResponse['results']:
        print(child['content'], file=out)
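(Aside: the extraction step above can be isolated into a small function that is testable without the network. This is only a sketch; the `results` and `content` keys are taken from the snippet above, and the response shape is assumed to match it.)

```python
import json

def extract_urls(json_text, out_path):
    """Write the 'content' field of each item in 'results' to out_path, one per line.

    Assumes the JSON shape shown in the question: {"results": [{"content": "<url>"}, ...]}
    """
    data = json.loads(json_text)
    with open(out_path, 'w') as out:
        for child in data['results']:
            out.write(child['content'] + '\n')
```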

Now that there are 10 links to CSV files in output.txt, I'm trying to figure out how to download and save those 10 files. I tried something like this, but it's not working:

urllib.request.urlretrieve(['content'], "C:\\Users\\test\\Desktop\\test\\test1.csv")  

Even if I get the above working, it only handles one file, and there are 10 file links in output.txt. Any ideas?

kederrac
  • _but not working._ What does that mean, exactly? Have you done any research? – AMC Mar 20 '20 at 20:55
  • Does this answer your question? [In Python, given a URL to a text file, what is the simplest way to read the contents of the text file?](https://stackoverflow.com/questions/1393324/in-python-given-a-url-to-a-text-file-what-is-the-simplest-way-to-read-the-cont) – AMC Mar 20 '20 at 20:55

1 Answer


Here is an exhaustive guide on how to download files over HTTP.

If the text file contains one link per line, you can iterate through the lines like this:

import urllib.request

file = open('path/to/file.ext', 'r')
id = 0
for line in file:
    url = line.strip()  # drop the trailing newline
    # ... some regex checking if the text is actually a valid url
    urllib.request.urlretrieve(url, 'path/to/file' + str(id) + '.ext')
    id += 1
file.close()
Papooch
  • You should use a context manager, and use `enumerate()` instead of the `id` variable (dangerous name, by the way). – AMC Mar 20 '20 at 20:56
  • I should, yes, but for the sake of simplicity, I didn't wanna introduce new confusing concepts to OP, as it seems they are not at all well versed in Python (it's included in the answer I linked anyway). – Papooch Mar 21 '20 at 14:51
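
For reference, the context-manager and `enumerate()` variant the comments mention could look like the sketch below, wrapped in a function. The `file{i}.csv` naming and the `.csv` extension are assumptions based on the question's 10 CSV links; the list file is expected to hold one URL per line.

```python
import os
import urllib.request

def download_links(list_path, dest_dir):
    """Download every URL listed in list_path (one per line) into dest_dir.

    Files are saved as file0.csv, file1.csv, ... matching the line order.
    """
    with open(list_path, 'r') as links:       # context manager closes the file for us
        for i, line in enumerate(links):      # enumerate() replaces the manual counter
            url = line.strip()                # drop the trailing newline
            if not url:                       # skip blank lines
                continue
            dest = os.path.join(dest_dir, 'file{}.csv'.format(i))
            urllib.request.urlretrieve(url, dest)
```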