
Every day last month I downloaded 1,800 websites. Some of them were active, some not. The active ones contained a timestamp, which I needed to extract for each domain.

I did that with this command:

# loop over the URL list, fetch each page, and pull out the timestamp value
while read -r domain; do
    timestamp=$(curl -sL0 --max-time 10 "$domain" | grep -oP '"timeSincePublish":\K\d+')
    printf '%s\t%s\n' "$domain" "$timestamp"
done < url.txt > output.csv

But I lost the output file (my mistake). I'd like to extract the timestamps again, this time from the offline files.

Can I edit this script to read from the folder itself instead of from a txt file?

Haroldo_OK
  • `for file in /path/to/folder/*; do …` https://stackoverflow.com/questions/20796200/how-to-iterate-over-files-in-a-directory-with-bash – helb Oct 04 '17 at 11:11
  • Possible duplicate of [How to iterate over files in a directory with Bash?](https://stackoverflow.com/questions/20796200/how-to-iterate-over-files-in-a-directory-with-bash) – helb Oct 04 '17 at 11:12
  • What do you mean by offline files? If you didn't explicitly save the output from `curl` it's not going to be on your disk. – tripleee Oct 04 '17 at 11:17
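
Following helb's comment, a minimal sketch of the loop reworked to read from the saved files rather than `url.txt`; the folder path `./pages` is an assumption, as is the idea that the saved pages still contain the same `"timeSincePublish"` field:

# iterate over the downloaded pages instead of fetching URLs from url.txt;
# ./pages is an assumed location for the saved files
for file in ./pages/*; do
    timestamp=$(grep -m1 -oP '"timeSincePublish":\K\d+' "$file")
    printf '%s\t%s\n' "$file" "$timestamp"
done > output.csv

Here the filename stands in for the domain column; if the files were saved with the domain in their names, `basename "$file"` would recover it.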

0 Answers