I'm trying to run Scrapy on IBM Cloud as a function. My __main__.py is as follows:
import scrapy
from scrapy import crawler
from scrapy.crawler import CrawlerProcess
from twisted.internet import reactor


class AutoscoutListSpider(scrapy.Spider):
    name = "vehicles list"

    def __init__(self, params, *args, **kwargs):
        super(AutoscoutListSpider, self).__init__(*args, **kwargs)
        make = params.get("make", None)
        model = params.get("model", None)
        mileage = params.get("mileage", None)
        # Build the search URL from the invocation parameters
        init_url = "https://www.autoscout24.be/nl/resultaten?sort=standard&desc=0&ustate=N%2CU&size=20&page=1&cy=B&mmvmd0={0}&mmvmk0={1}&kmto={2}&atype=C&".format(
            model, make, mileage)
        self.start_urls = [init_url]

    def parse(self, response):
        # Get the total result count when the list page loads
        init_total_results = int(response.css('.cl-filters-summary-counter::text').extract_first().replace('.', ''))
        if init_total_results > 400:
            yield {"message": "There are MORE than 400 results"}
        else:
            yield {"message": "There are LESS than 400 results"}


def main(params):
    process = CrawlerProcess()
    try:
        runner = crawler.CrawlerRunner()
        runner.crawl(AutoscoutListSpider, params)
        d = runner.join()
        d.addBoth(lambda _: reactor.stop())
        reactor.run()
        return {"Success ": main_result}
    except Exception as e:
        return {"Error ": e, "params ": params}
I upload it as an IBM Cloud function, and that part works fine.
But the problem is when I run it, either in the Python console or by invoking the IBM function: the first time it executes, but when I try to execute it a second time I get this error:
{'Error ': ReactorNotRestartable(), 'params ': {'make': '9', 'model': '1624', 'mileage': '2500'}}
It is invoked like this:
IBM:
ibmcloud wsk action invoke --result ascrawler --param make 9 --param model 1624 --param mileage 2500
Python console:
main({"make":"9", "model":"1624", "mileage":"2500"})
With the following code I've tried to make it possible to run it multiple times, but without success:
runner = crawler.CrawlerRunner()
runner.crawl(AutoscoutListSpider, params)
d = runner.join()
d.addBoth(lambda _: reactor.stop())
reactor.run()
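One thing I've been considering, but haven't tested, is isolating each crawl in its own child process so that every invocation gets a fresh reactor (the _run_crawl helper below is just my own sketch, and I'm not sure spawning a process is even allowed inside an IBM Cloud function):

import multiprocessing
from scrapy.crawler import CrawlerProcess

def _run_crawl(params, queue):
    # Child process: the Twisted reactor here is brand new on every invocation
    try:
        process = CrawlerProcess()
        process.crawl(AutoscoutListSpider, params)
        process.start()  # blocks until the crawl finishes
        queue.put({"Success ": "crawl finished"})
    except Exception as e:
        queue.put({"Error ": str(e), "params ": params})

def main(params):
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=_run_crawl, args=(params, queue))
    p.start()
    p.join()
    return queue.get()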
Any idea how to solve it?