I am crawling a single website and parsing some content and images, but even for a simple site with about 100 pages the job takes hours to finish. By my math, with DOWNLOAD_DELAY = 0.75 the pages alone should take roughly 100 × 0.75 s ≈ 75 seconds, plus the image requests, so I would expect minutes rather than hours. I am using the settings below; any help would be highly appreciated. I have already seen this question, "Scrapy's Scrapyd too slow with scheduling spiders", but couldn't gather much insight from it.
EXTENSIONS = {'scrapy.contrib.logstats.LogStats': 1}  # scrapy.extensions.logstats.LogStats on Scrapy >= 1.0
LOGSTATS_INTERVAL = 60.0             # log crawl stats every 60 seconds
RETRY_TIMES = 4                      # retry failed requests up to 4 times
CONCURRENT_REQUESTS = 32             # global concurrency limit
CONCURRENT_REQUESTS_PER_DOMAIN = 12  # concurrency limit per domain
CONCURRENT_ITEMS = 200               # items processed in parallel per response
DOWNLOAD_DELAY = 0.75                # wait ~0.75 s between requests to the same site (randomized +/-50% by default), which caps per-domain throughput
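For reference, here is a stripped-down sketch of the kind of spider I am running; the domain, selectors, and field names are placeholders rather than my actual code:

import scrapy


class SiteSpider(scrapy.Spider):
    name = 'site'
    allowed_domains = ['example.com']   # placeholder domain
    start_urls = ['http://example.com/']

    def parse(self, response):
        # One item per page: some text content plus the image URLs found on it.
        yield {
            'url': response.url,
            'title': response.css('title::text').extract_first(),
            'image_urls': response.css('img::attr(src)').extract(),
        }
        # Follow in-site links; allowed_domains keeps the crawl on this one site,
        # and Scrapy's duplicate filter avoids re-requesting visited pages.
        for href in response.css('a::attr(href)').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse)

Since each page yields image URLs as well as text, the total request count is several times the page count, but even so I don't see how it adds up to hours.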