How do I write a Bash script that copies all the links from a website without downloading anything? It only needs to collect the links and save them to a txt file.
I've tried this code:
wget --spider --force-html -r -l1 http://somesite.com 2>&1 | grep 'Saving to:'
For example, a website contains download links pointing to dlink.com, and I just want to copy every link that contains dlink.com and save the list to a txt file.
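Something along these lines is roughly what I'm after (a sketch only, assuming wget's spider output and grep/awk are enough; somesite.com and dlink.com are placeholders):

#!/bin/bash
# Crawl one level deep in spider mode (nothing is saved) and keep only
# the URLs that mention dlink.com.
site="http://somesite.com"

# wget writes its log to stderr, so redirect it with 2>&1.
# Log lines that start with "--" carry the visited URL in the third field.
wget --spider --force-html -r -l1 "$site" 2>&1 \
  | grep '^--' \
  | awk '{ print $3 }' \
  | grep 'dlink\.com' \
  | sort -u > links.txt

Is something like this the right approach, or is there a better way?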
I've searched around on Google, but nothing I found was useful.