Either curl or wget can be used in this case. All 3 of the commands below do the same thing: they download the file at http://path/to/file.txt and save it locally as "my_file.txt".
Note that in all commands below, I also recommend using the -L or --location option with curl in order to follow HTTP 302 redirects to the new location of the file, if it has moved. wget requires no additional options for this, as it follows redirects automatically.
# save the file locally as my_file.txt
wget http://path/to/file.txt -O my_file.txt # my favorite--it has a progress bar
curl -L http://path/to/file.txt -o my_file.txt
curl -L http://path/to/file.txt > my_file.txt
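If you want to see whether a URL redirects at all before downloading, you can ask curl for just the response headers. This is a small sketch of that check, using the same placeholder URL as above, so substitute a real one:

# ask curl for only the response headers (-I = HEAD request, -s = silent);
# a 301/302 response includes a "Location" header pointing to the new URL
curl -sI http://path/to/file.txt | grep -i '^location:'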
Alternatively, to save the file locally under the same name it has remotely, use either wget by itself, or curl with -O or --remote-name:
# save the file locally as file.txt
wget http://path/to/file.txt
curl -LO http://path/to/file.txt
curl -L --remote-name http://path/to/file.txt
Notice that the -O in all of the commands above is the capital letter "O".
The nice thing about the wget command is that it shows a progress bar while it downloads.
You can prove that the files downloaded by each of the 3 techniques above are exactly identical by comparing their SHA-512 hashes. Running sha512sum my_file.txt after each of the commands above, and comparing the results, reveals that all 3 files have the exact same hash (sha sum), meaning the files are identical, byte-for-byte.
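As a rough sketch of that check (again using the placeholder URL, so substitute a real one; the numbered file names are just for illustration), you can download the file with each of the 3 techniques under a different name and hash them all at once:

# download the same file with each of the 3 techniques, under 3 different names
wget http://path/to/file.txt -O my_file1.txt
curl -L http://path/to/file.txt -o my_file2.txt
curl -L http://path/to/file.txt > my_file3.txt
# print all 3 hashes side by side; identical hashes mean byte-for-byte identical files
sha512sum my_file1.txt my_file2.txt my_file3.txt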
References
- I learned about the -L option with curl here: Is there a way to follow redirects with command line cURL?
- See also: How to capture cURL output to a file?