
The API should allow arbitrary HTTP GET requests containing URLs the user wants scraped, and then Flask should return the results of the scrape.

The following code works for the first HTTP request, but after the Twisted reactor stops, it won't restart. I may not even be going about this the right way, but I just want to put a RESTful Scrapy API up on Heroku, and what I have so far is all I can think of.

Is there a better way to architect this solution? Or how can I allow scrape_it to return without stopping the Twisted reactor (which can't be started again)?

from flask import Flask
import os
import sys
import json

from n_grams.spiders.n_gram_spider import NGramsSpider

# scrapy api
from twisted.internet import reactor
import scrapy
from scrapy.crawler import CrawlerRunner
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals

app = Flask(__name__)


def scrape_it(url):
    items = []
    def add_item(item):
        items.append(item)

    runner = CrawlerRunner()

    d = runner.crawl(NGramsSpider, [url])
    d.addBoth(lambda _: reactor.stop()) # <<< TROUBLES HERE ???

    dispatcher.connect(add_item, signal=signals.item_passed)

    reactor.run(installSignalHandlers=0) # the script will block here until the crawling is finished

    return items

@app.route('/scrape/<path:url>')
def scrape(url):

    ret = scrape_it(url)

    return json.dumps(ret, ensure_ascii=False, encoding='utf8')


if __name__ == '__main__':
    PORT = os.environ['PORT'] if 'PORT' in os.environ else 8080

    app.run(debug=True, host='0.0.0.0', port=int(PORT))
Josh.F
  • Could you provide a traceback or any error output? Also, why not just remove the line `d.addBoth(lambda _: reactor.stop())` and call `reactor.stop()` after `reactor.run()`? I'm assuming it errors out because when it enters the function the reactor could be in either a started or a stopped state; it's not guaranteed. – AdriVelaz Sep 24 '15 at 21:16
  • why do you want to use Scrapy? There are other ways to scrape pages – ahmed Sep 26 '15 at 22:53
  • @ahmed my problem is building an async queue for pulling many pages, and then spidering out to the links on those pages. What would you recommend for that? – Josh.F Sep 28 '15 at 15:35

2 Answers


I don't think there is a good way to create a Flask-based API for Scrapy. Flask is not the right tool for that because it is not based on an event loop. To make things worse, the Twisted reactor (which Scrapy uses) can't be started/stopped more than once in a single thread.

Let's assume there is no problem with the Twisted reactor and you could start and stop it. It wouldn't make things much better, because your scrape_it function may block for an extended period of time, so you would still need many threads/processes.

I think the way to go is to create the API using an async framework like Twisted or Tornado; it will be more efficient than a Flask-based (or Django-based) solution because the API will be able to serve requests while Scrapy is running a spider.

Scrapy is based on Twisted, so using twisted.web or https://github.com/twisted/klein can be more straightforward. But Tornado is also not hard, because you can make it use the Twisted event loop.
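
For illustration, here is a minimal, untested sketch of what a Klein-based version of the code from the question could look like (NGramsSpider and its URL-list argument are taken from the question; everything else is an assumption). Klein runs on the Twisted reactor, so the reactor is started once for the whole process and never has to be restarted per request:

import json

from klein import Klein
from scrapy import signals
from scrapy.crawler import CrawlerRunner

from n_grams.spiders.n_gram_spider import NGramsSpider

app = Klein()
runner = CrawlerRunner()

@app.route('/scrape/<path:url>')
def scrape(request, url):
    items = []

    def collect_item(item, response, spider):
        items.append(dict(item))

    # Create a crawler explicitly so we can listen to its signals for this request only.
    crawler = runner.create_crawler(NGramsSpider)
    crawler.signals.connect(collect_item, signal=signals.item_scraped)

    d = runner.crawl(crawler, [url])
    # Klein accepts a Deferred as a response; the reactor keeps serving
    # other requests while this crawl is running.
    d.addCallback(lambda _: json.dumps(items, ensure_ascii=False))
    return d

app.run('0.0.0.0', 8080)  # starts the Twisted reactor once, for everything

The key difference from the Flask version is that nothing ever calls reactor.stop(); the crawl's Deferred is simply handed back to the framework.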

There is a project called ScrapyRT which does something very similar to what you want to implement - it is an HTTP API for Scrapy. ScrapyRT is based on Twisted.
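
For a rough idea of how ScrapyRT is used (assuming it is installed with `pip install scrapyrt` and started from inside the Scrapy project, and that the spider is registered under the name 'n_gram_spider', which is an assumption here):

import requests

resp = requests.get(
    'http://localhost:9080/crawl.json',      # ScrapyRT's default port and endpoint
    params={'spider_name': 'n_gram_spider',  # which spider to run
            'url': 'http://example.com'},    # start URL for the crawl
)
print(resp.json())  # the response includes the scraped items as JSON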

As an example of Scrapy-Tornado integration, check Arachnado - it shows how to integrate Scrapy's CrawlerProcess with Tornado's Application.

If you really want a Flask-based API, then it could make sense to start crawls in separate processes and/or use a queue solution like Celery. This way you lose most of Scrapy's efficiency; if you go this way, you can use requests + BeautifulSoup as well.
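
For completeness, a minimal illustration of that last option (the selector and return value are placeholders, not the question's actual extraction logic):

import requests
from bs4 import BeautifulSoup

def scrape_it(url):
    # Plain blocking fetch; no Twisted reactor involved, so it can be
    # called from a Flask view as many times as needed.
    response = requests.get(url, timeout=30)
    soup = BeautifulSoup(response.text, 'html.parser')
    # Extract whatever the spider would have extracted, e.g. link texts.
    return [a.get_text(strip=True) for a in soup.select('a')]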

Mikhail Korobov

I was working on a similar project last week; it's an SEO service API. My workflow was like this (a rough sketch of the code follows the list):

  • The client sends a request to the Flask-based server with a URL to scrape, and a callback URL to notify the client when scraping is done (the client here is another web app)
  • Run Scrapy in the background using Celery. The spider will save the data to the database.
  • The background service will notify the client by calling the callback URL when the spider is done.
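
A rough sketch of that workflow, with hypothetical names (the Redis broker URL, the 'n_gram_spider' name, the `url` spider argument, and the callback payload are all assumptions, not the author's actual code). The crawl runs in a subprocess so that each crawl gets its own Twisted reactor:

import subprocess

import requests
from celery import Celery
from flask import Flask, jsonify, request

# Hypothetical broker URL; any Celery-supported broker works.
celery_app = Celery('tasks', broker='redis://localhost:6379/0')
app = Flask(__name__)

@celery_app.task
def crawl_and_notify(url, callback_url):
    # Run the crawl in its own process so it gets a fresh Twisted reactor;
    # the spider's pipeline is assumed to save items to the database.
    subprocess.check_call(['scrapy', 'crawl', 'n_gram_spider', '-a', 'url=' + url])
    # Notify the client that the crawl has finished.
    requests.post(callback_url, json={'url': url, 'status': 'done'})

@app.route('/scrape', methods=['POST'])
def scrape():
    payload = request.get_json()
    crawl_and_notify.delay(payload['url'], payload['callback_url'])
    return jsonify({'status': 'queued'}), 202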
ahmed
  • could you help me understand the callback URL idea? I follow you up to that point, and I'm not sure how to implement it... Thanks btw, this is an awesome idea – Josh.F Sep 29 '15 at 16:53
  • It's how your client will know that the crawler has finished. It's only useful if your client is a website; if you don't use a callback, your client will have to periodically check whether the crawler has finished. – ahmed Sep 29 '15 at 18:05