With the help of 'Life is complex' I have managed to scrape data from the CNN news website. The extracted data (URLs) are saved in a .csv file (test1). Note this was done manually, as it was easier!
from newspaper import Config
from newspaper import Article
from newspaper import ArticleException
import csv
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Firefox/78.0'
config = Config()
config.browser_user_agent = USER_AGENT
config.request_timeout = 10
with open('test1.csv', 'r') as file:
    csv_file = file.readlines()
    for url in csv_file:
        try:
            article = Article(url.strip(), config=config)
            article.download()
            article.parse()
            print(article.title)
            article_text = article.text.replace('\n', ' ')
            print(article.text)
        except ArticleException:
            print('***FAILED TO DOWNLOAD***', article.url)

with open('test2.csv', 'a', newline='') as csvfile:
    headers = ['article title', 'article text']
    writer = csv.DictWriter(csvfile, lineterminator='\n', fieldnames=headers)
    writer.writeheader()
    writer.writerow({'article title': article.title,
                     'article text': article.text})
With the code above I manage to scrape the actual news information (title and content) from each URL and export it to a .csv file. The only issue with the export is that it only exports the last title and text, so I think it keeps overwriting the info on the first row.
How can I get all the titles and content into the csv file?
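I suspect the fix looks something like this sketch (placeholder data stands in for the newspaper Article objects, so the structure is easier to see): open the output file once, write the header once, and call writerow() once per article inside the loop. But I'm not sure this is the right approach:

```python
import csv

# Placeholder results standing in for the parsed Article objects.
articles = [
    {'article title': 'Title A', 'article text': 'Text A'},
    {'article title': 'Title B', 'article text': 'Text B'},
]

headers = ['article title', 'article text']

# Open the file once, write the header once, then one row per article.
with open('test2.csv', 'w', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, lineterminator='\n', fieldnames=headers)
    writer.writeheader()
    for row in articles:
        writer.writerow(row)
```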