The sample site I am using is: http://stats.jenkins.io/jenkins-stats/svg/svgs.html
There are a ton of CSVs linked from this page. Obviously I could click each link and download the files one at a time, but I know there has to be a better way.
I was able to put together the following Python script using BeautifulSoup, but all it does is print the soup:
from bs4 import BeautifulSoup
import urllib2

# Fetch the stats page and parse the HTML
jenkins = "http://stats.jenkins.io/jenkins-stats/svg/svgs.html"
page = urllib2.urlopen(jenkins)
soup = BeautifulSoup(page, "html.parser")
print soup
Below is a sample of what I get when I print the soup, but I still don't see how to go from this markup to actually downloading all of the CSV files.
<td>
<a alt="201412-jobs.svg" class="info" data-content="<object data='201412-jobs.svg' width='200' type='image/svg+xml'/>" data-original-title="201412-jobs.svg" href="201412-jobs.svg" rel="popover">SVG</a>
<span>/</span>
<a alt="201412-jobs.csv" class="info" href="201412-jobs.csv">CSV</a>
</td>
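To make it clearer what I am after, here is a rough sketch of the kind of loop I imagine should work (assuming the relative hrefs like 201412-jobs.csv can just be resolved against the page URL with urljoin and fetched the same way as the page itself), though I have not gotten anything like this running yet:
from bs4 import BeautifulSoup
import urllib2
import urlparse
import os

jenkins = "http://stats.jenkins.io/jenkins-stats/svg/svgs.html"
soup = BeautifulSoup(urllib2.urlopen(jenkins), "html.parser")

# Find every <a> tag whose href points at a CSV file
for link in soup.find_all("a", href=True):
    href = link["href"]
    if href.endswith(".csv"):
        # Turn the relative href into an absolute URL and save the file locally
        csv_url = urlparse.urljoin(jenkins, href)
        data = urllib2.urlopen(csv_url).read()
        with open(os.path.basename(href), "wb") as f:
            f.write(data)
Is soup.find_all("a", href=True) the right way to pick out those CSV links, or is there a cleaner way to grab all of them at once?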