I'm learning about NLP, and to practise I'm scraping an Amazon book-review page using Scrapy. I've extracted the fields I want and am outputting them to a JSON file. When this file is loaded as a DataFrame, each field is recorded as a single list rather than one value per row. How can I split these lists so that the DataFrame has a row for each review, rather than all entries being recorded in separate lists? Code:
import scrapy


class ReviewspiderSpider(scrapy.Spider):
    name = 'reviewspider'
    allowed_domains = ['amazon.co.uk']
    start_urls = ['https://www.amazon.com/Gone-Girl-Gillian-Flynn/product-reviews/0307588378/ref=cm_cr_othr_d_paging_btm_1?ie=UTF8&reviewerType=all_reviews&pageNumber=1']

    def parse(self, response):
        # Each XPath returns a list of strings, one entry per review on the page
        users = response.xpath('//a[contains(@data-hook, "review-author")]/text()').extract()
        titles = response.xpath('//a[contains(@data-hook, "review-title")]/text()').extract()
        dates = response.xpath('//span[contains(@data-hook, "review-date")]/text()').extract()
        found_helpful = response.xpath('//span[contains(@data-hook, "helpful-vote-statement")]/text()').extract()
        rating = response.xpath('//i[contains(@data-hook, "review-star-rating")]/span[contains(@class, "a-icon-alt")]/text()').extract()
        content = response.xpath('//span[contains(@data-hook, "review-body")]/text()').extract()

        # These variables are already plain lists (extract() was called above),
        # so they are yielded directly rather than calling .extract() again
        yield {
            'users': users,
            'titles': titles,
            'dates': dates,
            'found_helpful': found_helpful,
            'rating': rating,
            'content': content,
        }
Sample Output:
users = ['Lauren', 'James'...'John']
dates = ['on September 28, 2017', 'on December 26, 2017'...'on November 17, 2016']
rating = ['5.0 out of 5 stars', '2.0 out of 5 stars'...'5.0 out of 5 stars']
Desired Output:
index 1: [users='Lauren', dates='on September 28, 2017', rating='5.0 out of 5 stars']
index 2: [users='James', dates='on December 26, 2017', rating='2.0 out of 5 stars']
...
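To make the mismatch concrete, here's a minimal pandas sketch with made-up sample data standing in for the real scrape: loading the yielded item as a single record gives one row whose cells are whole lists, whereas building the frame from the lists directly gives the row-per-review shape I'm after.

```python
import pandas as pd

# Made-up stand-in for the single item the spider yields
record = {
    'users': ['Lauren', 'James'],
    'dates': ['on September 28, 2017', 'on December 26, 2017'],
    'rating': ['5.0 out of 5 stars', '2.0 out of 5 stars'],
}

# Loading the JSON output as-is: one row, each cell a whole list
df_lists = pd.DataFrame([record])
print(len(df_lists))   # 1

# The shape I want: one row per review
df_rows = pd.DataFrame(record)
print(len(df_rows))    # 2
```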
I know that the Pipeline related to the spider should probably be edited to achieve this, but I have limited Python knowledge and couldn't make sense of the Scrapy documentation. I've also tried the solutions from here and here, but I don't know enough to consolidate those answers with my own code. Any help would be much appreciated.