
I'm setting up a database (a pandas DataFrame) to store the web links of the past week's news articles for a list of companies. I have written Python code, but it sometimes executes and sometimes does not, and it never produces an error. Since there is no log or error output, I am finding it difficult to understand what is going on in the background.

I tried clearing the browser cache, since I am using a Jupyter notebook, and I also tried another application, Spyder. I have the same problem in both the Jupyter notebook and the other application.


import requests
import pandas as pd
from bs4 import BeautifulSoup

links_output=[]

class Newspapr:
    def __init__(self,term):
        self.term=term
        self.url='https://www.google.com/search?q={0}&safe=active&tbs=qdr:w,sdb:1&tbm=nws&source=lnt&dpr=1'.format(self.term)

    def NewsArticlerun(self):
        response=requests.get(self.url)
        soup=BeautifulSoup(response.text,'html.parser')
        links=soup.select(".r a")

        numOpen = min(5, len(links))
        for i in range(numOpen):
            response_links = "https://www.google.com" + links[i].get("href")
            print(response_links)
            links_output.append({"Weblink":response_links})
        pd.DataFrame.from_dict(links_output)



list_of_companies=["Wipro","Reliance","icici bank","vedanta", "DHFL","yesbank","tata motors","tata steel","IL&FS","Jet airways","apollo tyres","ashok leyland","Larson & Turbo","Mindtree","Infosys","TCS","AxisBank","Mahindra & Mahindra"]

for i in list_of_companies:
    comp_list = str('"'+ i + '"')
    call_code=Newspapr(comp_list)
    call_code.NewsArticlerun()

I expect the code to print each web link and also produce a pandas DataFrame.

3 Answers


First, the naming convention of the function is wrong; I have changed it.

Second, you are not returning anything from your function; return it:

def newsArticlerun(self):
    response=requests.get(self.url)
    soup=BeautifulSoup(response.text,'html.parser')
    links=soup.select(".r a")

    numOpen = min(5, len(links))
    for i in range(numOpen):
        response_links = "https://www.google.com" + links[i].get("href")
        print(response_links)
        links_output.append({"Weblink":response_links})
    return pd.DataFrame.from_dict(links_output) # this will return your df

To print the result, add a print call:

for i in list_of_companies:
    comp_list = str('"'+ i + '"')
    call_code=Newspapr(comp_list)
    print(call_code.newsArticlerun()) # here

Note: this is why you are not getting results. Google is returning a CAPTCHA page instead of search results:

<div style="font-size:13px;">
<b>About this page</b><br/><br/>Our systems have detected unusual traffic from your computer network.  This page checks to see if it's really you sending the requests, and not a robot.  <a href="#" onclick="document.getElementById('infoDiv').style.display='block';">Why did this happen?</a><br/><br/>
<div id="infoDiv" style="display:none; background-color:#eee; padding:10px; margin:0 0 15px 0; line-height:1.4em;">
This page appears when Google automatically detects requests coming from your computer network which appear to be in violation of the <a href="//www.google.com/policies/terms/">Terms of Service</a>. The block will expire shortly after those requests stop.  In the meantime, solving the above CAPTCHA will let you continue to use our services.<br/><br/>This traffic may have been sent by malicious software, a browser plug-in, or a script that sends automated requests.  If you share your network connection, ask your administrator for help — a different computer using the same IP address may be responsible.  <a href="//support.google.com/websearch/answer/86640">Learn more</a><br/><br/>Sometimes you may be asked to solve the CAPTCHA if you are using advanced terms that robots are known to use, or sending requests very quickly.
</div>
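
A quick way to confirm this in the script is to check the response body for the wording of that block page; a minimal sketch, assuming one of the question's search URLs:

import requests

url = 'https://www.google.com/search?q="Wipro"&safe=active&tbs=qdr:w,sdb:1&tbm=nws&source=lnt&dpr=1'
response = requests.get(url)

# The block page shown above contains this phrase; a normal results page
# does not, so it works as a cheap sanity check before parsing.
if "unusual traffic" in response.text:
    print("Blocked by Google: got a CAPTCHA page instead of results")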
shaik moeed

I suppose you're triggering Google Search's anti-spam countermeasures. Adding a delay between your requests may help.
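
A minimal sketch of that idea, reusing the question's class; the 5-15 second range is an arbitrary guess, not a documented threshold:

import time
import random

for company in list_of_companies:
    Newspapr('"' + company + '"').NewsArticlerun()
    # Pause for a random interval so the requests look less like a
    # burst from an automated script; tune the range as needed.
    time.sleep(random.uniform(5, 15))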

Edit: As Yu Chen said here, use the official Google API https://developers.google.com/custom-search/docs/tutorial/creatingcse

Edit2: Have a look at this post for an in-depth answer: Programmatically searching google in Python using custom search

Edit3: To make your question more useful, you should clarify its nature by prepending the term "Google Search" to its title.
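
For the Custom Search route, a minimal sketch of querying the Custom Search JSON API with requests; the API key and search engine ID (cx) are placeholders you create in the Google console, and dateRestrict="w1" is the API's past-week filter:

import requests

API_KEY = "YOUR_API_KEY"        # placeholder: create in the Google console
CX = "YOUR_SEARCH_ENGINE_ID"    # placeholder: ID of your custom search engine

params = {
    "key": API_KEY,
    "cx": CX,
    "q": '"Wipro"',         # one of the company names from the question
    "dateRestrict": "w1",   # restrict results to the past week
}

response = requests.get("https://www.googleapis.com/customsearch/v1", params=params)
for item in response.json().get("items", []):
    print(item["link"])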

Basile

It might be because no user-agent is specified. The default requests user-agent is python-requests; Google recognizes it and blocks the request, so you receive completely different HTML with different selectors. Check what your user-agent is.

Pass a user-agent in the request headers:

headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
requests.get("YOUR_URL", headers=headers)

If you want to scrape a lot of news results frequently, one thing you can do is randomize (rotate) user-agents on each request, for example with random.choice() over a list of user-agent strings, as sketched below. List of user-agents.
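
A minimal sketch of that rotation; the user-agent strings are just examples, so substitute a fuller list of your own:

import random
import requests

# Example pool; extend it with entries from a maintained user-agent list.
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.51 Safari/537.36",
]

headers = {"User-Agent": random.choice(user_agents)}  # new UA per request
html = requests.get("https://www.google.com/search",
                    headers=headers,
                    params={"q": '"Wipro"', "tbm": "nws"})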


Code and full example in the online IDE:


from bs4 import BeautifulSoup
import requests, lxml

headers = {
    "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

params = {
    "q": "best potato recipes", # query
    "hl": "en",                 # language 
    "gl": "us",                 # country to search from
    "tbm": "nws",               # news results filter
}

html = requests.get('https://www.google.com/search', headers=headers, params=params)
soup = BeautifulSoup(html.text, 'lxml')

for result in soup.select('.dbsr'):
    title = result.select_one('.nDgy9d').text
    link = result.a['href']
    source = result.select_one('.WF4CUc').text
    snippet = result.select_one('.Y3v8qd').text
    date_published = result.select_one('.WG9SHc span').text
    
    print(f'{title}\n{link}\n{snippet}\n{date_published}\n{source}\n')

    # code to save to DataFrame

------
'''
9 Best Potato Recipes for Sides, Desserts, or Entrées
https://www.themanual.com/food-and-drink/9-best-potato-recipes-for-sides-desserts-or-entrees/
9 Best Potato Recipes for Sides, Desserts, or Entrées · Potato Latkes with 
Sour Cream and Applesauce · Smoked Hasselback Potatoes · Potato Salad.
3 weeks ago
The Manual
...
'''
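
Since the original goal was a pandas DataFrame, here is a minimal sketch of collecting the scraped fields into one, assuming soup was built with the same selectors as in the example above:

import pandas as pd

# assumes `soup` was built as in the example above
rows = []
for result in soup.select('.dbsr'):
    rows.append({
        "Title": result.select_one('.nDgy9d').text,
        "Weblink": result.a['href'],  # same "Weblink" key as in the question
    })

df = pd.DataFrame(rows)
print(df)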

Alternatively, you can do the same thing using the Google News Results API from SerpApi. It's a paid API with a free plan.

The difference in your case is that you don't have to figure out why a certain thing doesn't extract properly, since that is already done for the end user; all that needs to be done is to iterate over the structured JSON and get the data you want.

Code to integrate:

import os
from serpapi import GoogleSearch

params = {
  "engine": "google",
  "q": "best potato recipe",
  "tbm": "nws",
  "api_key": os.getenv("API_KEY"),
}

search = GoogleSearch(params)
results = search.get_dict()

for news_result in results["news_results"]:
  print(f"Title: {news_result['title']}\nLink: {news_result['link']}\n")
    
  # code to save to DataFrame


------
'''
Title: 9 Best Potato Recipes for Sides, Desserts, or Entrées
Link: https://www.themanual.com/food-and-drink/9-best-potato-recipes-for-sides-desserts-or-entrees/
...
'''

Disclaimer: I work for SerpApi.

Dmitriy Zub