I have a problem with my Python script. When I run my spider directly with scrapy runspider Myspider it works, but when I run it from the main file I get this error: KeyError: 'driver'
My settings file:
SELENIUM_DRIVER_NAME = 'chrome'
#SELENIUM_DRIVER_EXECUTABLE_PATH = '/home/PATH/OF/FILE/chromedriver'
SELENIUM_DRIVER_ARGUMENTS = ['--headless']

DOWNLOADER_MIDDLEWARES = {
    'scrapy_selenium.SeleniumMiddleware': 800
}
My spider file:
import scrapy
from scrapy_selenium import SeleniumRequest

class MySpider(scrapy.Spider):
    name = 'my_spider'

    def __init__(self, list_urls, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.urls = list_urls

    def start_requests(self):
        for url in self.urls:
            yield SeleniumRequest(
                url=url['link'],
                callback=self.parse,
                wait_time=15,
            )
And my main file:
import scrapy
import classListUrls
from scrapy.crawler import CrawlerProcess
from dir.spiders import Spider

URL = "example.com"
urls = classListUrls.GenListUrls(URL)

process = CrawlerProcess()
process.crawl(Spider.MySpider, list_urls=urls.list_urls())
process.start()
I don't understand why I get this error.
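As far as I can tell, the error means the 'driver' key is simply never placed into the request meta, presumably because the SeleniumMiddleware from my settings file is not active in the second run. A minimal stand-in sketch (plain dicts, no Scrapy involved, names assumed) of why the lookup then fails:

```python
# Illustration only: scrapy_selenium's SeleniumMiddleware stores the
# webdriver under the 'driver' key of the request meta dict. If the
# middleware is never enabled, that key is never set, and any later
# access raises KeyError: 'driver'.

def lookup_driver(meta):
    # Mirrors an access like response.request.meta['driver']
    return meta['driver']

meta_via_runspider = {'driver': object()}  # middleware enabled: key present
meta_via_main = {}                         # middleware not enabled: key missing

lookup_driver(meta_via_runspider)          # works fine

try:
    lookup_driver(meta_via_main)
except KeyError as exc:
    print('KeyError:', exc)               # the same error as in my traceback
```

This matches what I observe: the same spider code fails only when the configured downloader middleware is missing.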