Good afternoon. I have a quick question. I am using Scrapy, for example with the following spider:
import scrapy

class UrlSpider(scrapy.Spider):
    name = 'url_spider'
    allowed_domains = ['url.com']
    start_urls = ['http://url.com/']

    def parse(self, response):
        for popular_link in response.xpath('//something/a/@href').extract():
            absolute_popular_link_url = response.urljoin(popular_link)
            yield scrapy.Request(absolute_popular_link_url, callback=self.parse_popular_link)
I would like to do the following: when the spider receives a specific URL or HTTP status code, all crawling should stop (pause). While paused, it should create a separate request, receive the response, and analyze it; after that, crawling should start again (un-pause). If possible, could you please provide a part of the code? Thank you in advance for your assistance.
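For reference, here is a rough, untested sketch of what I imagine (the engine's pause()/unpause() methods exist in Scrapy; the trigger status codes, the check URL, and the analyze_check helper are just placeholders I made up):

import scrapy
import requests  # blocking HTTP client used for the one-off side request

class UrlSpider(scrapy.Spider):
    name = 'url_spider'
    allowed_domains = ['url.com']
    start_urls = ['http://url.com/']
    # Placeholder: status codes that should trigger the pause; listing them here
    # makes Scrapy pass such responses to the callback instead of filtering them.
    handle_httpstatus_list = [403, 429]

    def parse(self, response):
        if response.status in self.handle_httpstatus_list:
            # Pause the engine: no further requests are scheduled until unpause().
            self.crawler.engine.pause()
            # One-off request made outside Scrapy. Note requests.get() blocks the
            # Twisted reactor; this is a simplification for the sketch.
            check_response = requests.get('http://url.com/check')  # placeholder URL
            self.analyze_check(check_response)  # hypothetical analysis step
            self.crawler.engine.unpause()
            return
        for popular_link in response.xpath('//something/a/@href').extract():
            absolute_popular_link_url = response.urljoin(popular_link)
            yield scrapy.Request(absolute_popular_link_url, callback=self.parse_popular_link)

    def analyze_check(self, check_response):
        # Placeholder for whatever analysis decides it is safe to continue.
        self.logger.info('Check returned status %s', check_response.status_code)

Is something along these lines the right approach, or is there a better-supported way to pause and resume the whole crawl?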