
I'm new to Scrapy and I'm looking for a way to run it from a Python script. I found 2 sources that explain this:

http://tryolabs.com/Blog/2011/09/27/calling-scrapy-python-script/

http://snipplr.com/view/67006/using-scrapy-from-a-script/

I can't figure out where I should put my spider code and how to call it from the main function. Please help. This is the example code:

# This snippet can be used to run scrapy spiders independent of scrapyd or the scrapy command line tool and use it from a script. 
# 
# The multiprocessing library is used in order to work around a bug in Twisted, in which you cannot restart an already running reactor or in this case a scrapy instance.
# 
# [Here](http://groups.google.com/group/scrapy-users/browse_thread/thread/f332fc5b749d401a) is the mailing-list discussion for this snippet. 

#!/usr/bin/python
import os
os.environ.setdefault('SCRAPY_SETTINGS_MODULE', 'project.settings') #Must be at the top before other imports

from scrapy import log, signals, project
from scrapy.xlib.pydispatch import dispatcher
from scrapy.conf import settings
from scrapy.crawler import CrawlerProcess
from multiprocessing import Process, Queue

class CrawlerScript():

    def __init__(self):
        self.crawler = CrawlerProcess(settings)
        if not hasattr(project, 'crawler'):
            self.crawler.install()
        self.crawler.configure()
        self.items = []
        dispatcher.connect(self._item_passed, signals.item_passed)

    def _item_passed(self, item):
        self.items.append(item)

    def _crawl(self, queue, spider_name):
        spider = self.crawler.spiders.create(spider_name)
        if spider:
            self.crawler.queue.append_spider(spider)
        self.crawler.start()
        self.crawler.stop()
        queue.put(self.items)

    def crawl(self, spider):
        queue = Queue()
        p = Process(target=self._crawl, args=(queue, spider,))
        p.start()
        p.join()
        return queue.get(True)

# Usage
if __name__ == "__main__":
    log.start()

    """
    This example runs spider1 and then spider2 three times. 
    """
    items = list()
    crawler = CrawlerScript()
    items.append(crawler.crawl('spider1'))
    for i in range(3):
        items.append(crawler.crawl('spider2'))
    print items

# Snippet imported from snippets.scrapy.org (which no longer works)
# author: joehillen
# date  : Oct 24, 2010

Thank you.

Has QUIT--Anony-Mousse
user47954
  • I replaced the inappropriate tag [tag:data-mining] (= advanced data analysis) with [tag:web-scraping]. To improve your question, make sure it includes: **What did you try?** and **What happened when you tried?** – Has QUIT--Anony-Mousse Nov 18 '12 at 09:00
  • Those examples are outdated - they won't work with current Scrapy anymore. – Sjaak Trekhaak Nov 19 '12 at 13:14
  • Thanks for the comment. What do you suggest I do in order to call a spider from within a script? I'm using the latest Scrapy – user47954 Nov 19 '12 at 14:51
  • Cross-referencing [this answer](http://stackoverflow.com/a/27744766/771848) - should give you a detailed overview on how to run Scrapy from a script. – alecxe Jan 03 '15 at 01:39
  • AttributeError: module 'scrapy.log' has no attribute 'start' – PlsWork May 17 '19 at 15:01
  • Also check [this answer](https://stackoverflow.com/a/56517504/2248627) for a single-file solution – Levon Jun 10 '19 at 21:29

8 Answers


All other answers reference Scrapy v0.x. According to the updated docs, Scrapy 1.0 demands:

import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):
    # Your spider definition
    ...

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(MySpider)
process.start() # the script will block here until the crawling is finished
danielmhanover
  • I could run this program and see the output in the console. But how can I get the results back within Python? Thanks – Winston Jul 23 '15 at 21:18
  • That is handled within the spider definition – danielmhanover Jul 23 '15 at 21:20
  • Thanks, but I need more detail. Traditionally I would write my own spider (similar to the BlogSpider on the official website) and then run `scrapy crawl myspider -o items.json -t json`; all the needed data is saved in a JSON file for further processing. I have never done that within the spider definition. Do you have a link for reference? Thank you very much – Winston Jul 23 '15 at 21:46
  • Like @Winston, I couldn't find online documentation on how to return data from the spider to my Python code. Can you clarify what you mean by `handled within the spider definition`? @sddhhanover Thanks – Shadi Mar 12 '16 at 11:30
  • I ended up using [item loaders](http://doc.scrapy.org/en/latest/topics/loaders.html) and attaching a function to the [item scraped](http://doc.scrapy.org/en/latest/topics/signals.html#item-scraped) signal; a sketch of that approach follows these comments – Shadi Mar 14 '16 at 13:27
  • How can I pass an argument to MySpider? – Akshay Hazari Jul 11 '17 at 05:36
  • @AkshayHazari the `process.crawl` function will accept keyword arguments and pass them to your spider's `__init__` – Kwame Mar 12 '18 at 18:56
  • Is the parameter to CrawlerProcess optional? – softmarshmallow Nov 27 '18 at 15:43
  • How can I attach a pipeline to it? – Grzegorz Krug Aug 10 '20 at 14:48
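Following up on the item_scraped suggestion in the comments above, here is a rough sketch of how the calling script could collect the items. It is not part of the answer itself: the TitleSpider class and the URL are made-up placeholders, and the exact point at which the crawler's signal manager becomes available has shifted slightly between Scrapy versions, so treat this as an outline rather than a definitive recipe.

import scrapy
from scrapy import signals
from scrapy.crawler import CrawlerProcess

class TitleSpider(scrapy.Spider):
    # throw-away spider so the sketch is self-contained
    name = 'titles'
    start_urls = ['https://example.com']

    def parse(self, response):
        yield {'title': response.css('title::text').get()}

items = []

def collect_item(item, response, spider):
    # called once for every item the spider yields
    items.append(item)

process = CrawlerProcess({'USER_AGENT': 'Mozilla/5.0'})
crawler = process.create_crawler(TitleSpider)  # build the Crawler explicitly so its signal manager is reachable
process.crawl(crawler)                         # schedule the crawl; nothing runs until start()
crawler.signals.connect(collect_item, signal=signals.item_scraped)
process.start()                                # blocks here until crawling is finished
print(items)                                   # items gathered by the signal handler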

We can simply use

from scrapy.crawler import CrawlerProcess
from project.spiders.test_spider import SpiderName

process = CrawlerProcess()
process.crawl(SpiderName, arg1=val1,arg2=val2)
process.start()

These arguments are passed to the spider's __init__ method, so you can store them on the instance and use them anywhere in the spider.
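For example, a spider receiving those keyword arguments might look roughly like this (a sketch only: SpiderName, arg1 and arg2 are just the placeholder names used above, and the URL pattern is made up):

import scrapy

class SpiderName(scrapy.Spider):
    name = 'spider_name'

    def __init__(self, arg1=None, arg2=None, *args, **kwargs):
        # keyword arguments given to process.crawl() arrive here
        super().__init__(*args, **kwargs)
        self.arg1 = arg1
        self.arg2 = arg2

    def start_requests(self):
        # the stored arguments are then usable anywhere on the spider instance
        yield scrapy.Request('https://example.com/%s' % self.arg1, callback=self.parse)

    def parse(self, response):
        yield {'arg2': self.arg2, 'url': response.url}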

Arun Augustine

Though I haven't tried it, I think the answer can be found in the Scrapy documentation. To quote directly from it:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log
from testspiders.spiders.followall import FollowAllSpider

spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run() # the script will block here

From what I gather, this is a new development in the library which renders some of the earlier approaches online (such as the one in the question) obsolete.

mrmagooey

In Scrapy 0.19.x you should do this:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from testspiders.spiders.followall import FollowAllSpider
from scrapy.utils.project import get_project_settings

spider = FollowAllSpider(domain='scrapinghub.com')
settings = get_project_settings()
crawler = Crawler(settings)
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run() # the script will block here until the spider_closed signal was sent

Note these lines

settings = get_project_settings()
crawler = Crawler(settings)

Without them your spider won't use your settings and will not save the items. It took me a while to figure out why the example in the documentation wasn't saving my items. I sent a pull request to fix the doc example.

One more way to do so is to call the command directly from your script:

from scrapy import cmdline
cmdline.execute("scrapy crawl followall".split())  #followall is the spider's name

Copied from my first answer, here: https://stackoverflow.com/a/19060485/1402286

Igor Medeiros

When multiple crawlers need to be run inside one Python script, stopping the reactor needs to be handled with caution, because the reactor can only be stopped once and cannot be restarted.
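One documented way around this is to chain the crawls on a single reactor with CrawlerRunner and stop the reactor yourself once the last crawl finishes. A rough sketch, where the two spiders are throw-away placeholders standing in for your own:

import scrapy
from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

class Spider1(scrapy.Spider):
    name = 'spider1'
    start_urls = ['https://example.com']

    def parse(self, response):
        yield {'spider': self.name, 'title': response.css('title::text').get()}

class Spider2(Spider1):
    # same logic under a different name, just to have two spiders to chain
    name = 'spider2'

configure_logging()
runner = CrawlerRunner()

@defer.inlineCallbacks
def crawl():
    # run the crawls one after another inside the single reactor,
    # then stop the reactor so the script can exit
    yield runner.crawl(Spider1)
    yield runner.crawl(Spider2)
    reactor.stop()

crawl()
reactor.run()  # blocks here until both crawls have finished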

However, I found while doing my project that using

os.system("scrapy crawl yourspider")

is the easiest. It saves me from handling all sorts of signals, especially when I have multiple spiders.

If performance is a concern, you can use multiprocessing to run your spiders in parallel, something like:

import os
from multiprocessing import Pool


def _crawl(spider_name=None):
    # each worker process simply shells out to the scrapy CLI
    if spider_name:
        os.system('scrapy crawl %s' % spider_name)
    return None


def run_crawler():

    spider_names = ['spider1', 'spider2', 'spider2']

    pool = Pool(processes=len(spider_names))
    pool.map(_crawl, spider_names)
Fengzmg
  • Are all of these spiders within the same project? I was trying to do something similar except with each spider in a different project (since I couldn't get the results to pipeline properly into their own database tables). Since I have to run multiple projects, I can't put the script in any one project. – loremIpsum1771 Jul 28 '15 at 04:49

This is an improvement on the answers to "Scrapy throws an error when run using CrawlerProcess"

and on https://github.com/scrapy/scrapy/issues/1904#issuecomment-205331087

First create your usual spider and make sure it runs successfully from the command line. It is very important that it actually runs and exports data, images or files.

Once that works, add the code shown in my program below: the part above the spider class definition, and the part under `if __name__ == "__main__":` that invokes the settings.

It picks up the necessary settings, which `from scrapy.utils.project import get_project_settings` (recommended by many) failed to do for me.

Both the portion above and the portion below the spider class must be present together; with only one of them it won't run. The spider has to be run from the folder containing scrapy.cfg, not from any other folder.

#spider.py
import sys
sys.path.append(r'D:\ivana\flow')  # folder where scrapy.cfg is located

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.settings import Settings
from flow import settings as my_settings  # the project's settings module

#----------------Typical Spider Program starts here-----------------------------

# ... spider class definition goes here, e.g. class FlowSpider(scrapy.Spider): ...

#----------------Typical Spider Program ends here-------------------------------

if __name__ == "__main__":

    crawler_settings = Settings()
    crawler_settings.setmodule(my_settings)  # load the project settings into a Settings object

    process = CrawlerProcess(settings=crawler_settings)
    process.crawl(FlowSpider)  # FlowSpider is the spider class defined above
    process.start(stop_after_crawl=True)
  • Add context to improve the quality of the answer. Keep in mind 7 other answers were given before yours, and you want to draw attention to your "superior" solution. Perhaps to get rep as well. End of review. – ZF007 Oct 22 '20 at 08:40
# -*- coding: utf-8 -*-
import sys
from scrapy.cmdline import execute


def gen_argv(s):
    sys.argv = s.split()


if __name__ == '__main__':
    gen_argv('scrapy crawl abc_spider')
    execute()

Put this code in a path from which you can run scrapy crawl abc_spider on the command line. (Tested with Scrapy==0.24.6)

Kxrr

If you want to run a simple crawl, it's easy: just run the command

scrapy crawl <spider_name>

There are also options to export your results in formats such as JSON, XML or CSV:

scrapy crawl <spider_name> -o result.csv (or result.json, or result.xml)

You may want to try it.

Doeun