I wrote a Ruby scraper to grab campaign finance data for California Senate candidates and save each candidate as a hash.
Here's the main website: http://cal-access.sos.ca.gov/Campaign/Candidates/
Here's an example of a candidate page: http://cal-access.sos.ca.gov/Campaign/Committees/Detail.aspx?id=1342974&session=2011&view=received
And here's the GitHub repo in case you want to see my comments in the code: https://github.com/aboutaaron/Baugh-For-Senate-2012/blob/master/final-exam.rb
On to the code...
require 'nokogiri'
require 'open-uri'

campaign_data = Nokogiri::HTML(open('http://cal-access.sos.ca.gov/Campaign/Candidates/'))

class Candidate
  def initialize(url)
    @url = url
    @cal_access_url = "http://cal-access.sos.ca.gov"
    @nodes = Nokogiri::HTML(open(@cal_access_url + @url))
  end
  def get_summary
    candidate_page = @nodes

    {
      :political_party => candidate_page.css('span.hdr15').text,
      :current_status => candidate_page.css('td tr:nth-child(2) td:nth-child(2) .txt7')[0].text,
      :last_report_date => candidate_page.css('td tr:nth-child(3) td:nth-child(2) .txt7')[0].text,
      :reporting_period => candidate_page.css('td tr:nth-child(4) td:nth-child(2) .txt7')[0].text,
      :contributions_this_period => candidate_page.css('td tr:nth-child(5) td:nth-child(2) .txt7')[0].text.gsub(/[$,](?=\d)/, ''),
      :total_contributions_this_period => candidate_page.css('td tr:nth-child(6) td:nth-child(2) .txt7')[0].text.gsub(/[$,](?=\d)/, ''),
      :expenditures_this_period => candidate_page.css('td tr:nth-child(7) td:nth-child(2) .txt7')[0].text.gsub(/[$,](?=\d)/, ''),
      :total_expenditures_this_period => candidate_page.css('td tr:nth-child(8) td:nth-child(2) .txt7')[0].text.gsub(/[$,](?=\d)/, ''),
      :ending_cash => candidate_page.css('td tr:nth-child(9) td:nth-child(2) .txt7')[0].text.gsub(/[$,](?=\d)/, '')
    }
  end
  def get_contributors
    contributions_received = @nodes
    grab_contributor_page = @nodes.css("a.sublink6")[0]['href']
    contributor_page = Nokogiri::HTML(open(@cal_access_url + grab_contributor_page))
    grab_contributions_page = contributor_page.css("a")[25]["href"]
    contributions_received = Nokogiri::HTML(open(@cal_access_url + grab_contributions_page))

    puts
    puts "#{@cal_access_url}#{grab_contributions_page}"
    puts

    contributions_received.css("table").reduce([]) do |memo, contributors|
      begin
        memo << {
          :name_of_contributor => contributions_received.css("table:nth-child(57) tr:nth-child(2) td:nth-child(1) .txt7").text
        }
      rescue NoMethodError => e
        puts e.message
        puts "Error on #{contributors}"
      end
      memo
    end
  end
end
campaign_data.css('a.sublink2').each do |candidates|
  puts "Just grabbed the page for " + candidates.text
  candidate = Candidate.new(candidates["href"])
  p candidate.get_summary
end
get_summary works as planned. get_contributors grabs the first contributor <td> as intended, but it stores that same value 20-plus times. I'm only grabbing the name for now until I figure out the repeated-entry issue.
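
My guess is that the repetition comes from the reduce block: I call .css on contributions_received (the whole page) for every table instead of using the contributors block variable, so each iteration returns the same cell. Something like this is roughly what I think I need (an untested sketch; the row and cell selectors are guesses I still have to check against the actual page):

contributions_received.css("table:nth-child(57) tr").reduce([]) do |memo, row|
  cells = row.css("td .txt7")
  next memo if cells.empty? # skip header/spacer rows

  memo << {
    :name_of_contributor => cells[0].text.strip
    # later: amount, date, etc. from the other cells in this row
  }
  memo
end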
The end goal is to have a hash for each contributor with all of their required information, and possibly to move the data into a SQL database/Rails app. But before that, I just want a working scraper.
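
For that later step I'm picturing something along these lines with the sqlite3 gem (just a sketch; the table name and columns are placeholders until the scraper actually works):

require 'sqlite3'

db = SQLite3::Database.new('campaign_finance.db')
db.execute <<-SQL
  CREATE TABLE IF NOT EXISTS contributors (
    id   INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT
  )
SQL

# contributor would be one of the hashes returned by get_contributors
db.execute("INSERT INTO contributors (name) VALUES (?)", [contributor[:name_of_contributor]])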
Any advice or guidance? Sorry if the code isn't super; I'm a super newbie to programming.