When I write the parse() function, can I yield both requests and items for a single page?
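For example, I am imagining something like this minimal sketch (the URL, selectors and item fields are just placeholders I made up):

```python
import scrapy


class PageASpider(scrapy.Spider):
    name = "page_a"
    start_urls = ["http://example.com/a"]  # placeholder URL

    def parse(self, response):
        # yield an item scraped from this page...
        yield {"title": response.css("h1::text").get()}
        # ...and also yield requests for links found on the same page
        for href in response.css("a::attr(href)").getall():
            yield response.follow(href, callback=self.parse)
```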
I want to extract some data from page A, store it in a database, and also extract links to be followed (this can be done with a Rule in CrawlSpider).
Let's call the pages linked from the A pages the B pages. I can write another parse_item() to extract data from the B pages, but I also want to extract some links from the B pages. Is a Rule the only way to extract those links? And how do I deal with duplicate URLs in Scrapy?
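To make it concrete, here is roughly the structure I have in mind, written as a plain Spider instead of a CrawlSpider (the selectors, URLs and field names are made-up placeholders):

```python
import scrapy


class ABSpider(scrapy.Spider):
    name = "ab_spider"
    start_urls = ["http://example.com/a"]  # placeholder A page URL

    def parse(self, response):
        # A page: yield the data (an item pipeline would store it in the database)
        yield {"page": "A", "title": response.css("h1::text").get()}
        # also follow links from the A page to B pages
        for href in response.css("a.to-b::attr(href)").getall():  # made-up selector
            yield response.follow(href, callback=self.parse_item)

    def parse_item(self, response):
        # B page: yield the data...
        yield {"page": "B", "title": response.css("h1::text").get()}
        # ...and also follow further links found on the B page
        for href in response.css("a.more::attr(href)").getall():  # made-up selector
            yield response.follow(href, callback=self.parse_item)
```

Regarding duplicates, my understanding is that Scrapy's scheduler already filters out requests for URLs it has already seen (unless dont_filter=True is passed to the Request), but I am not sure whether relying on that is the right approach here.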