There is a page I want to scrape; you can pass it variables in the URL and it generates specific content. All of the content is in one giant HTML table.
I am looking for a way to write a script that goes through 180 of these different pages, extracts specific information from certain columns in the table, does some math, and then writes the results to a .csv file, so that I can do further analysis on the data myself.
What is the easiest way to scrape the pages, parse the HTML, and store the data in a .csv file?
I have done similar things in Python and PHP, but the HTML parsing is neither the easiest nor the cleanest part. Are there other routes that are easier?
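For context, here is roughly the shape of what I have been hacking together in Python (requests + BeautifulSoup + csv). The URL, the query parameter, and the column positions are placeholders, not the real page, but the structure of my attempts looks like this:

```python
import csv

import requests
from bs4 import BeautifulSoup

BASE_URL = "http://example.com/report"  # placeholder for the real page

with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["page_id", "value_a", "value_b", "ratio"])

    for page_id in range(1, 181):  # the ~180 pages, selected via a URL variable
        resp = requests.get(BASE_URL, params={"id": page_id})
        soup = BeautifulSoup(resp.text, "html.parser")

        table = soup.find("table")  # the one giant table on the page
        for row in table.find_all("tr")[1:]:  # skip the header row
            cells = [td.get_text(strip=True) for td in row.find_all("td")]
            if len(cells) < 4:
                continue
            # the column indices here are made up; the real ones differ
            value_a = float(cells[1])
            value_b = float(cells[3])
            writer.writerow([page_id, value_a, value_b, value_a / value_b])
```

It works, but picking values out of the table by cell index feels fragile, which is why I am asking whether there is a cleaner approach.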