Wikipedia publishes its page view counts as hourly text files. (See for instance http://dumps.wikimedia.org/other/pagecounts-raw/2014/2014-01/)
For a project I need to extract keywords and their associated page views for the whole year 2014. Since each file covers one hour (so 24*365 = 8760 files in total) and is about 80 MB, doing this manually would be a hard task.
My question: 1. Is there any way to download the files automatically? (The file names are structured consistently, so this could be helpful.)
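
To illustrate what I have in mind, here is a minimal Python sketch of how the download might be automated. It assumes the directory layout and the `pagecounts-YYYYMMDD-HHMMSS.gz` naming pattern visible in the 2014-01 index linked above, and the local `OUT_DIR` name is just a placeholder; it is a sketch, not a tested solution.

```python
import os
import re
import urllib.request

BASE = "http://dumps.wikimedia.org/other/pagecounts-raw/2014"
OUT_DIR = "pagecounts-2014"  # local target directory (placeholder name)

os.makedirs(OUT_DIR, exist_ok=True)

for month in range(1, 13):
    index_url = "{}/2014-{:02d}/".format(BASE, month)
    with urllib.request.urlopen(index_url) as resp:
        html = resp.read().decode("utf-8", errors="replace")

    # The monthly index page is assumed to link each hourly dump
    # as pagecounts-YYYYMMDD-HHMMSS.gz
    filenames = sorted(set(re.findall(r"pagecounts-\d{8}-\d{6}\.gz", html)))

    for name in filenames:
        target = os.path.join(OUT_DIR, name)
        if os.path.exists(target):
            continue  # skip files that were already downloaded
        print("downloading", name)
        urllib.request.urlretrieve(index_url + name, target)
```

Scraping the monthly index pages instead of constructing the URLs directly avoids having to guess the exact timestamp in each file name.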