I've considered and tried various options, namely:

- FormRequests
- Passing cookies
Sadly, I keep getting stuck on: https://www.marktplaats.nl/cookiewall/?target=https%3A%2F%2Fwww.marktplaats.nl%2F
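For what it's worth, that wall URL just carries the original destination, percent-encoded, in its target query parameter; the standard library recovers it:

```python
from urllib.parse import parse_qs, urlparse

wall = 'https://www.marktplaats.nl/cookiewall/?target=https%3A%2F%2Fwww.marktplaats.nl%2F'
# parse_qs percent-decodes the query values for us
target = parse_qs(urlparse(wall).query)['target'][0]
print(target)  # https://www.marktplaats.nl/
```

So the wall is just a consent interstitial that redirects back to wherever you were headed.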
import json

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class MarktplaatsSpider(CrawlSpider):
    name = 'MarktplaatsSpidertest'
    source = 'Marktplaats.nl'
    allowed_domains = ['marktplaats.nl']
    start_urls = ['https://www.marktplaats.nl/']
    rules = [Rule(LinkExtractor(allow=()), callback='parse_item', follow=True)]

    # start_requests (plural) overrides the default requests built from start_urls
    def start_requests(self):
        form_data = {'CookieOptIn': 'true'}
        yield scrapy.Request(
            'https://www.marktplaats.nl',
            method='POST',
            body=json.dumps(form_data),
            headers={'Content-Type': 'application/json; charset=UTF-8'},
        )

    def parse_item(self, response):
        print(response.url)
        yield {
            'source': self.source,
            'URL': response.url,
            'hash': get_hash(response.url),  # get_hash is defined elsewhere
        }
There are several other websites where I come across the same problem. I simply do not know how my spider can get past the cookie wall to the actual page.
Can anyone help me or point me in the right direction?
Regards,
Durk