
I am trying to get a list of the domains from the first 100 results:

For example, for abc.com/xxxx/dddd the domain should be abc.com

I am using the following code:

import time
from bs4 import BeautifulSoup
import requests

search = input("What do you want to ask: ")
search = search.replace(" ", "+")
link = "https://www.google.com/search?q=" + search
print(link)
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
source = requests.get(link, headers=headers).text
soup = BeautifulSoup(source, "html.parser")


However, I don't know how to select only the domains, nor how to specify 100 results.

When I print soup.text I only get:

'te - Pesquisa Google(function(){window.google={kEI:\'jsCaXM3AHM6g5OUP4eyT2A0\',kEXPI:\'31\',authuser:0,kscs:\'c9c918f0_jsCaXM3AHM6g5OUP4eyT2A0\',kGL:\'BR\'};google.sn=\'web\';google.kHL=\'pt-BR\';})();(function(){google.lc=[];google.li=0;google.getEI=function(a){for(var b;a&&(!a.getAttribute||!(b=a.getAttribute("eid")));)a=a.parentNode;return b||google.kEI};google.getLEI=function(a){for(var b=null;a&&(!a.getAttribute||!(b=a.getAttribute("leid")));)a=a.parentNode;return b};google.https=function(){return"https:"==window.location.protocol};google.ml=function(){return null};google.time=function()
Cesar
  • Possible duplicate of [google search with python requests library](https://stackoverflow.com/questions/22623798/google-search-with-python-requests-library) – Life is complex Mar 27 '19 at 02:11
  • @Lifeiscomplex It isn't. The OP is asking for guidance on scraping with bs4, while the other question is about navigating the DOM with requests – kerwei Mar 27 '19 at 02:18

1 Answer


To get 100 results

You have to scrape page by page until you have 100 results. Google returns 10 results per page and selects the page with the start parameter; for the keyword beautiful+girls, the URL to scrape for page 2 is https://www.google.com/search?q=beautiful+girls&start=10
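A minimal sketch of that paging loop (reusing the User-Agent header from the question and assuming the default of 10 results per page; Google may throttle or block rapid automated requests, so treat this as illustrative):

import requests

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
query = "beautiful+girls"

pages = []
for start in range(0, 100, 10):  # start = 0, 10, ..., 90 covers results 1-100
    url = "https://www.google.com/search?q=" + query + "&start=" + str(start)
    pages.append(requests.get(url, headers=headers).text)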

To get the domain only

First, you have to get all of the div elements with class 'srg' (after viewing the page source, I saw that all of the result links sit inside these):

srg_divs = soup.find_all("div", {"class": "srg"})

Then find all a tags inside each div and reduce every href to its domain:

from urllib.parse import urlparse

out = ''
for div in srg_divs:
    links = div.find_all('a', href=True)
    for a in links:
        # reduce the full URL to its domain (netloc)
        parsed_uri = urlparse(a['href'])
        domain = '{uri.netloc}'.format(uri=parsed_uri)
        # skip googleusercontent.com results and relative links with no domain
        if 'googleusercontent' in domain or domain == '':
            continue
        out += domain + '\n'
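Putting both steps together (a sketch under the same assumptions; the srg class reflects Google's result markup at the time of writing and may change without notice):

from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
domains = []

for start in range(0, 100, 10):  # 10 pages of 10 results each
    url = "https://www.google.com/search?q=beautiful+girls&start=" + str(start)
    soup = BeautifulSoup(requests.get(url, headers=headers).text, "html.parser")
    for div in soup.find_all("div", {"class": "srg"}):
        for a in div.find_all("a", href=True):
            # netloc is the domain part of the URL
            domain = urlparse(a["href"]).netloc
            if domain and "googleusercontent" not in domain:
                domains.append(domain)

print("\n".join(domains[:100]))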
Trung NT Nguyen