
I am working on a larger program that will display the links from the results of a Google Newspaper search and then analyze those links for certain keywords, context, and data. I've gotten this one part to work, but now when I try to iterate through the pages of results I run into a problem. I'm not sure how to do this without an API, which I don't know how to use. I just need to be able to iterate through multiple pages of search results so that I can apply my analysis to them. It seems like there should be a simple solution to iterating through the pages of results, but I am not seeing it.

Are there any suggestions on ways to approach this problem? I am somewhat new to Python and have been teaching myself these scraping techniques, so I'm not sure if I'm just missing something simple here. I know this may be an issue with Google restricting automated searches, but even pulling in the first 100 or so links would be beneficial. I have seen examples of this for regular Google searches, but not for Google Newspaper searches.

Here is the body of the code. If there are any lines where you have suggestions, that would be helpful. Thanks in advance!

import csv
import requests
from lxml import html

def get_page_tree(url):
    page = requests.get(url=url, verify=False)
    return html.fromstring(page.text)

def find_other_news_sources(initial_url):
    forwarding_identifier = '/url?q='
    google_news_search_tree = get_page_tree(url=initial_url)  # use the URL passed in rather than a hard-coded one
    other_news_sources_links = [a_link.replace(forwarding_identifier, '').split('&')[0] for a_link in google_news_search_tree.xpath('//a//@href') if forwarding_identifier in a_link]
    return other_news_sources_links

links = find_other_news_sources("https://www.google.com/search?    hl=en&gl=us&tbm=nws&authuser=0&q=ohio+pay-to-play&oq=ohio+pay-to-play&gs_l=news-cc.3..43j43i53.2737.7014.0.7207.16.6.0.10.10.0.64.327.6.6.0...0.0...1ac.1.NAJRCoza0Ro")  

with open('textanalysistest.csv', 'w', newline='') as myfile:  # newline='' avoids blank rows in the CSV on Windows
    wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
    for row in links:
        print(row)
        wr.writerow([row])  # write each link to the CSV as well as printing it
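
The closest I've come is the rough sketch below, which reuses get_page_tree from above and guesses that Google's start URL parameter pages through results ten at a time. I haven't confirmed that this works for the Newspaper search, so treat it as an untested idea:

def find_news_links_paged(base_url, num_pages=10):
    """Untested sketch: loop over Google's 'start' offset, assuming 10 results per page."""
    forwarding_identifier = '/url?q='
    all_links = []
    for page in range(num_pages):
        paged_url = base_url + "&start=" + str(page * 10)  # assumption: start=0, 10, 20, ... pages through the results
        tree = get_page_tree(url=paged_url)
        links = [a_link.replace(forwarding_identifier, '').split('&')[0]
                 for a_link in tree.xpath('//a//@href')
                 if forwarding_identifier in a_link]
        all_links.extend(links)
    return all_links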
  • Check out this [replit.com](https://replit.com/@DimitryZub1/Scrape-Google-News-with-Pagination#main.py) I wrote recently to do just that. – Dmitriy Zub Apr 15 '21 at 06:16

1 Answer


I'm looking into building a parser for a site with a structure similar to Google's (i.e. a bunch of consecutive results pages, each with a table of content of interest).

A combination of the Selenium package (for page-element based site navigation) and BeautifulSoup (for HTML parsing) seems like the weapon of choice for harvesting written content. You may find them useful too, although I have no idea what kinds of defenses Google has in place to deter scraping.

A possible implementation for Mozilla Firefox using selenium, beautifulsoup and geckodriver:

from bs4 import BeautifulSoup, SoupStrainer
from time import sleep  # not used below, but handy for pausing between requests (see the note at the end)
import codecs
from selenium import webdriver

def first_page(link):
    """Takes a link, and scrapes the desired tags from the html code"""
    driver = webdriver.Firefox(executable_path='C://example/geckodriver.exe')  # specify the appropriate driver for your browser here
    counter = 1
    driver.get(link)
    html = driver.page_source
    filter_html_table(html)
    counter += 1
    return driver, counter


def nth_page(driver, counter, max_iter):
    """Takes a driver instance, a counter to keep track of iterations, and max_iter for maximum number of iterations. Looks for a page element matching the current iteration (how you need to program this depends on the html structure of the page you want to scrape), navigates there, and calls mine_page to scrape."""
    while counter <= max_iter:
        pageLink = driver.find_element_by_link_text(str(counter))  # for other strategies to retrieve elements from a page, see the selenium documentation
        pageLink.click()
        scrape_page(driver)
        counter += 1
    else:
        print("Done scraping")
    return


def scrape_page(driver):
    """Takes a driver instance, extracts html from the current page, and calls function to extract tags from html of total page"""
    html = driver.page_source #Get html from page
    filter_html_table(html) #Call function to extract desired html tags
    return


def filter_html_table(html):
    """Takes a full page of html, filters the desired tags using beautifulsoup, calls function to write to file"""
    only_td_tags = SoupStrainer("td")#Specify which tags to keep
    filtered = BeautifulSoup(html, "lxml", parse_only=only_td_tags).prettify() #Specify how to represent content
    write_to_file(filtered) #Function call to store extracted tags in a local file.
    return


def write_to_file(output):
    """Takes the scraped tags and appends them to a local file (the file is created on the first write)."""
    fpath = "<path to your output file>"
    with codecs.open(fpath, 'a', encoding='utf-8') as f:  # 'a' creates the file if it doesn't exist; an explicit utf-8 encoding avoids problems with non-ASCII characters
        f.write(output)
    return

After this, it is just a matter of calling:

link = <link to site to scrape>
driver, n_iter = first_page(link)
nth_page(driver, n_iter, 1000) # the 1000 lets us scrape up to 1000 result pages

Note that this script assumes that the result pages you are trying to scrape are sequentially numbered, and that those numbers can be retrieved from the scraped page's html using 'find_element_by_link_text'. For other strategies to retrieve elements from a page, see the selenium documentation.
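
One caveat: in newer Selenium releases (4.x) the find_element_by_* helpers have been removed, so on a current version the lookup inside nth_page would instead use a By locator. A minimal sketch, assuming Selenium 4:

from selenium.webdriver.common.by import By

# Selenium 4+ equivalent of driver.find_element_by_link_text(str(counter))
pageLink = driver.find_element(By.LINK_TEXT, str(counter))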

Also note that you need to install the packages this depends on, as well as the driver that selenium needs in order to talk to your browser (in this case geckodriver): download geckodriver, place it in a folder, and then point 'executable_path' at the executable.
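
If you are on Selenium 4, the executable_path argument has also been replaced by a Service object; a minimal sketch of the same setup (the path is a placeholder) would be:

from selenium import webdriver
from selenium.webdriver.firefox.service import Service

# Selenium 4 style: the geckodriver path goes into a Service object
# (the path below is a placeholder; point it at wherever you saved geckodriver)
driver = webdriver.Firefox(service=Service(executable_path="C://example/geckodriver.exe"))

Recent Selenium versions (4.6+) can also locate or download the driver for you automatically, so the explicit path may not be needed at all.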

If you do end up using these packages, it can help to spread out your server requests using the time module (part of Python's standard library) so you don't exceed the maximum number of requests allowed by the server you are scraping. I didn't end up needing it for my own project, but the second answer to the original question shows an implementation example, with the time module used in the fourth code block.
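
As a rough sketch, such a pause could be dropped into nth_page's loop like this (the 2 to 5 second range is an arbitrary choice on my part, not something any server specifies):

from time import sleep
from random import uniform

def nth_page(driver, counter, max_iter):
    """Same as above, but with a randomized pause between page loads."""
    while counter <= max_iter:
        pageLink = driver.find_element_by_link_text(str(counter))
        pageLink.click()
        scrape_page(driver)
        sleep(uniform(2, 5))  # wait a few seconds between page loads to spread out the requests
        counter += 1
    print("Done scraping")
    return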

Yeeeeaaaahhh... If someone with higher rep could edit and add some links to beautifulsoup, selenium and time documentations, that would be great, thaaaanks.