Is there any way to effectively integrate Selenium into Scrapy for its page-rendering capabilities (in order to generate screenshots)?
Many solutions I've seen simply hand the URL of a Scrapy request/response to WebDriver after Scrapy has already processed the request, then work off whatever WebDriver renders. This doubles the number of requests, fails in many cases (sites requiring logins, sites with dynamic or pseudo-random content, etc.), and bypasses many extensions/middlewares.
Is there a "good" way to get the two working together? Or is there a better way to generate screenshots of the content I'm scraping?