Every day last month I downloaded 1800 websites. Some of them were active, some not. The pages of the active ones contained a timestamp, which I need to extract for each domain.
I did that with this command:
while read -r domain; do
    # -s silences curl's progress output, -L follows redirects,
    # and grep's \K prints only the digits after the key
    timestamp=$(curl -sL --max-time 10 "$domain" | grep -oP '"timeSincePublish":\K\d+')
    printf "%s\t%s\n" "$domain" "$timestamp"
done < url.txt > output.csv
But I stupidly lost the output file. Now I'd like to extract the timestamps again, this time from the offline files I already downloaded.
Can I edit this script to read the saved files from the folder itself instead of from a txt file?
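Something along these lines is what I have in mind; a rough sketch, assuming the saved pages sit as individual files in a directory called pages/ (the directory name and the *.html glob are placeholders for my actual layout):

for file in pages/*.html; do
    # skip the literal glob pattern when no files match
    [ -e "$file" ] || continue
    # same grep as before, but reading the saved file instead of curl output
    timestamp=$(grep -oP '"timeSincePublish":\K\d+' "$file")
    printf "%s\t%s\n" "$file" "$timestamp"
done > output.csv

Here the filename would stand in for the domain column; if the files are named after the domains, I suppose I could strip the path and extension with something like basename "$file" .html.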