22

I've been learning a lot of python lately to work on some projects at work.

Currently I need to do some web scraping of Google search results. I found several sites demonstrating how to use the Google AJAX Search API, but after attempting to use it, it appears to no longer be supported. Any suggestions?

I've been searching for quite a while to find a way but can't seem to find any solutions that currently work.

Sunny Patel
pbell
  • You _can_ search with Google without an API, but you're likely to get banned by Google if they suspect you're a bot. Read the TOS, you'll likely have to pay to use their API in any significant way. – Athena Jul 27 '16 at 17:30
  • I researched how to do it without an API, I have to change my header/user-agent info. But even when I do that I still can't get results. If that would work, I'd just put a sleep timer in between each request as to not be viewed as a bot. – pbell Jul 27 '16 at 18:34
  • I have written a google search bot, it works great, but since using a bot directly violates the ToS for Google, I'm not going to post it. Whatever you're trying to do, maybe go through the official APIs. – Athena Jul 27 '16 at 18:45

7 Answers

13

You can always scrape Google results directly. To do this, you can use the URL `https://www.google.com/search?q=<query>`, which will return the top 10 search results.

Then you can use lxml, for example, to parse the page. Depending on what you use, you can query the resulting node tree either via a CSS selector (`.r a`) or via an XPath selector (`//h3[@class="r"]/a`).

In some cases the resulting URL will redirect through Google. Usually it contains a query parameter `q` which holds the actual target URL.

Example code using lxml and requests:

from urllib.parse import urlparse, parse_qs

from lxml.html import fromstring
from requests import get

# Fetch the result page and parse it into an element tree
raw = get("https://www.google.com/search?q=StackOverflow").text
page = fromstring(raw)

for result in page.cssselect(".r a"):
    url = result.get("href")
    # Redirect links look like "/url?q=<target>&..."; unwrap them
    if url.startswith("/url?"):
        url = parse_qs(urlparse(url).query)["q"][0]
    print(url)
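For illustration, here is the XPath variant of the same selection, run against a minimal stand-in page (the markup below is a hand-made sample mimicking the `h3.r > a` structure the selectors above target, not live Google HTML, which changes frequently):

```python
from lxml.html import fromstring

# Hand-made sample mimicking the old "h3.r > a" result markup
sample = """
<div>
  <h3 class="r"><a href="/url?q=https://stackoverflow.com/">StackOverflow</a></h3>
  <h3 class="r"><a href="https://example.com/">Example</a></h3>
</div>
"""

page = fromstring(sample)

# XPath equivalent of the CSS selector ".r a"
links = page.xpath('//h3[@class="r"]/a/@href')
print(links)
```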

A note on Google banning your IP: in my experience, Google only bans you if you start spamming it with search requests. It will respond with a 503 if it thinks you are a bot.
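One way to cope with an occasional 503 is to back off and retry; a minimal sketch (the function name and signature are mine, not from any library, and you should still keep your overall request rate low):

```python
import time


def get_with_backoff(fetch, max_retries=4, base_delay=1.0):
    """Retry fetch() with exponential backoff while it returns HTTP 503.

    `fetch` is any zero-argument callable returning an object with a
    `status_code` attribute, e.g. the result of requests.get.
    """
    response = fetch()
    for attempt in range(max_retries):
        if response.status_code != 503:
            break
        time.sleep(base_delay * 2 ** attempt)  # wait 1s, 2s, 4s, ...
        response = fetch()
    return response
```

You would pass the actual request as the callable, e.g. `get_with_backoff(lambda: get("https://www.google.com/search?q=StackOverflow"))`.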

StuxCrystal
  • Thanks, I was able to get something working similar to this. – pbell Jul 28 '16 at 14:39
  • 2
    As of today, this is not working for me. When I view the source and DOM structure of the Google search results page, it looks as if the results are being loaded and rendered in JavaScript which would prevent this sort of naive scraping. Is this working for anyone else? – Lane Rettig Mar 06 '17 at 15:35
  • 1
    @Lane Rettig Works fine. – Billy Jhon Mar 19 '17 at 20:19
  • 2
    Not working for me. `page.cssselect(".r a")` is an empty array. – ZhouW Aug 17 '20 at 12:05
9

Here is another service that can be used for scraping SERPs: zenserp (https://zenserp.com). It does not require a client library and is cheaper.

Here is a python code sample:

import requests

headers = {
    'apikey': 'YOUR_API_KEY',  # replace with your zenserp API key
}

params = (
    ('q', 'Pied Piper'),
    ('location', 'United States'),
    ('search_engine', 'google.com'),
    ('language', 'English'),
)

response = requests.get('https://app.zenserp.com/api/search', headers=headers, params=params)
  • 1
    I have been using the API for 2 months, since it was the only one offering a free plan to start with. It works well and I have had no problems so far! – Dominik Kukacka May 28 '19 at 10:59
9

You have two options: build it yourself or use a SERP API.

A SERP API will return the Google search results as a formatted JSON response.

I would recommend a SERP API as it is easier to use, and you don't have to worry about getting blocked by Google.

1. SERP API

I have good experience with the scraperbox serp api.

You can use the following code to call the API. Make sure to replace YOUR_API_TOKEN with your scraperbox API token.

import urllib.parse
import urllib.request
import ssl
import json

# Note: this disables TLS certificate verification globally; avoid it in production.
ssl._create_default_https_context = ssl._create_unverified_context

# Urlencode the query string
q = urllib.parse.quote_plus("Where can I get the best coffee")

# Create the query URL.
query = "https://api.scraperbox.com/google"
query += "?token=%s" % "YOUR_API_TOKEN"
query += "&q=%s" % q
query += "&proxy_location=gb"

# Call the API.
request = urllib.request.Request(query)

raw_response = urllib.request.urlopen(request).read()
raw_json = raw_response.decode("utf-8")
response = json.loads(raw_json)

# Print the first result title
print(response["organic_results"][0]["title"])

2. Build your own Python scraper

I recently wrote an in-depth blog post on how to scrape search results with Python.

Here is a quick summary.

First you should get the HTML contents of the Google search result page.

import urllib.request

url = 'https://google.com/search?q=Where+can+I+get+the+best+coffee'

# Perform the request
request = urllib.request.Request(url)

# Set a normal User Agent header, otherwise Google will block the request.
request.add_header('User-Agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36')
raw_response = urllib.request.urlopen(request).read()

# Read the response as a utf-8 string
html = raw_response.decode("utf-8")

Then you can use BeautifulSoup to extract the search results. For example, the following code will get all titles.

from bs4 import BeautifulSoup

# The code to get the html contents here.

soup = BeautifulSoup(html, 'html.parser')

# Find all the search result divs
divs = soup.select("#search div.g")
for div in divs:
    # Search for an h3 tag
    results = div.select("h3")

    # Check if we have found a result
    if results:
        # Print the title
        h3 = results[0]
        print(h3.get_text())

You can extend this code to also extract the search result urls and descriptions.
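As a sketch of that extension, run against a minimal stand-in page (the `#search div.g` structure is the same assumption as above; Google's real markup differs and changes often, so treat this as illustrative only):

```python
from bs4 import BeautifulSoup

# Hand-made sample mimicking the "#search div.g" layout used above
html = """
<div id="search">
  <div class="g">
    <a href="https://example.com/coffee"><h3>Best coffee in town</h3></a>
    <span>Review of the best coffee shops.</span>
  </div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
results = []
for div in soup.select("#search div.g"):
    link = div.find("a")
    h3 = div.find("h3")
    snippet = div.find("span")
    if link and h3:
        results.append({
            "title": h3.get_text(),
            "url": link.get("href"),
            "description": snippet.get_text() if snippet else "",
        })
print(results)
```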

Dirk Hoekstra
1

You can also use a third-party service like Serp API (I wrote and run this tool), which is a paid Google search engine results API. It solves the issue of being blocked, and you don't have to rent proxies or do the result parsing yourself.

It's easy to integrate with Python:

from lib.google_search_results import GoogleSearchResults

params = {
    "q" : "Coffee",
    "location" : "Austin, Texas, United States",
    "hl" : "en",
    "gl" : "us",
    "google_domain" : "google.com",
    "api_key" : "demo",
}

query = GoogleSearchResults(params)
dictionary_results = query.get_dictionary()

GitHub: https://github.com/serpapi/google-search-results-python

Hartator
1

The current answers will work, but Google will ban you for scraping.

My current solution uses the requests_ip_rotator library:

import requests
from requests_ip_rotator import ApiGateway
import os

keywords = ['test']


def parse(keyword, session):
    url = f"https://www.google.com/search?q={keyword}"
    response = session.get(url)
    print(response)


if __name__ == '__main__':
    AWS_ACCESS_KEY_ID = ''
    AWS_SECRET_ACCESS_KEY = ''

    gateway = ApiGateway("https://www.google.com", access_key_id=AWS_ACCESS_KEY_ID,
                         access_key_secret=AWS_SECRET_ACCESS_KEY)
    gateway.start()

    session = requests.Session()
    session.mount("https://www.google.com", gateway)

    for keyword in keywords:
        parse(keyword, session)
    gateway.shutdown()

You can create AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the AWS console.

This solution allows you to make up to 1 million requests (the AWS free-tier limit).

Nikolay Pavlin
0

You can also use Serpdog's (https://serpdog.io) Google Search API to scrape Google search results in Python:

import requests
payload = {'api_key': 'APIKEY', 'q':'coffee' , 'gl':'us'}
resp = requests.get('https://api.serpdog.io/search', params=payload)
print(resp.text)

Docs: https://docs.serpdog.io

Disclaimer: I am the founder of serpdog.io

Darshan
0

Another service that can be used for scraping Google Search and other SERP data is SearchApi. You may want to test it out, as it offers 100 free credits upon registration. It provides a rich JSON data set and also returns the raw HTML of each request for free, so you can compare the parsed results against the original page.

Documentation for Google Search API: https://www.searchapi.io/docs/google

Python execution example:

import requests

payload = {'api_key': 'key', 'engine': 'google', 'q':'pizza'}
response = requests.get('https://www.searchapi.io/api/v1/search', params=payload)

print(response.text)

Disclaimer: I work for SearchApi

Sebas
  • 36
  • 3