
We can download multiple links using `wget -i file_name`, where file_name is a file that contains all the URLs we have to download.

I have 3 URLs in a file for example:

google.com
facebook.com
twitter.com

I request these URLs using `wget -i file_name`. But how can we specify the file names to store the results?

For example, we have to store the results from google.com, facebook.com, and twitter.com as response1, response2, and response3 respectively. Thanks in advance.

  • `man curl` and look at `-o` option? And sorry, but IMHO, this doesn't qualify as a programming Q. In the future, please post at http://superuser.com . Please read https://stackoverflow.com/help/on-topic , https://stackoverflow.com/help/how-to-ask , https://stackoverflow.com/help/dont-ask , https://stackoverflow.com/help/mcve and take the [tour](https://stackoverflow.com/tour) before posting more Qs here. Good luck. – shellter Dec 14 '17 at 14:00
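
For reference, the `curl` approach suggested in the comment above would look roughly like this (a sketch, not part of the question or answer; the output names `response1`..`response3` come from the question, and `-L` to follow redirects is an added assumption):

    # curl writes to the file given with -o; -L follows redirects
    curl -L -o response1 https://www.google.com
    curl -L -o response2 https://www.facebook.com
    curl -L -o response3 https://www.twitter.com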

1 Answer

I found a similar question here. Use the `-O file` option.

E.g.

    wget google.com
    ...
    16:07:52 (538.47 MB/s) - `index.html' saved [10728]

vs.

    wget -O foo.html google.com
    ...
    16:08:00 (1.57 MB/s) - `foo.html' saved [10728]

Referring to the above, I came up with a solution: write a simple shell script.

It's simply like executing `wget <URL> -O <filename>` multiple times.

  1. Create a file download_file.sh with contents like this:

    #!/bin/bash
    wget https://www.google.com -O google_file
    wget https://www.facebook.com -O fb_file
    wget https://www.twitter.com -O twitter_file  
    
  2. Make the file executable

    chmod +x download_file.sh
    
  3. Run the file

    ./download_file.sh
    

All the URLs will be downloaded to the filenames defined in download_file.sh. You can also tweak the shell script to your requirements, for example by providing the URLs from another file passed as an argument to the script, as sketched below.
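
That last suggestion could look roughly like this (a sketch, not part of the original answer; the input file name `urls.txt` and its two-column `<URL> <filename>` format are assumptions):

    #!/bin/bash
    # Usage: ./download_file.sh urls.txt
    # Each line of the input file is expected to hold: <URL> <output filename>
    while read -r url outfile; do
        wget "$url" -O "$outfile"
    done < "$1"

With an input file such as:

    https://www.google.com response1
    https://www.facebook.com response2
    https://www.twitter.com response3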
