
I have a single project with a web interface through which I need to download between 3,000 and 20,000 URLs per week (or month). I use a ticket system to show progress: what has been downloaded, what is pending, which URLs hit timeout errors, and similar issues. Right now I'm using Scrapyd, but I'm planning to switch to ScrapyRT, because it looks easier to run a single URL, get the result back immediately, and then update the ticket status. My question is: how many independent requests can ScrapyRT handle? I tried around 50–100 scraping requests asynchronously and the server simply stopped working.

Or is there some other way to do this? Scrapy Cluster and Frontera are not options for me.
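To illustrate the setup: a minimal sketch of sending URLs through ScrapyRT's documented `/crawl.json` endpoint while bounding concurrency with a small worker pool, so the server never has 50–100 spiders in flight at once. The spider name `my_spider`, the port, and the worker count are assumptions, not part of the question.

```python
# Hypothetical sketch: throttle ScrapyRT requests instead of firing
# dozens at once. Assumes ScrapyRT's /crawl.json endpoint on the
# default port 9080; spider name "my_spider" is a placeholder.
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlencode

SCRAPYRT = "http://localhost:9080/crawl.json"

def build_crawl_url(target_url, spider_name="my_spider"):
    """Build the ScrapyRT request URL for one target page."""
    return SCRAPYRT + "?" + urlencode(
        {"spider_name": spider_name, "url": target_url}
    )

def scrape_all(target_urls, max_workers=8):
    """Fetch each URL through ScrapyRT with bounded concurrency."""
    import requests  # external dependency, used only when scraping

    def fetch(u):
        resp = requests.get(build_crawl_url(u), timeout=120)
        resp.raise_for_status()
        return resp.json()  # ScrapyRT returns the scraped items as JSON

    # A small worker pool keeps the number of in-flight crawls low,
    # so ScrapyRT is never asked to run dozens of spiders at once.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, target_urls))
```

This only limits the client side; whether a single ScrapyRT instance copes with a given rate still depends on the spider and the host.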

Gallaecio
amarynets

0 Answers