Scrapyd is a great tool for managing Scrapy processes. But the best answer I can give is that it depends. First you need to figure out where your bottleneck is.
If it is CPU-intensive parsing, you should use multiple processes. Scrapy can handle thousands of requests in parallel through Twisted's implementation of the Reactor pattern, but it runs as a single process with no multi-threading, so it will only ever utilize one core.
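Here is a minimal sketch of one way to do that: shard the start URLs and run an independent crawl (one core each) per shard. The spider name `myspider` and the URL list are hypothetical placeholders.

```python
import multiprocessing


def run_shard(start_urls):
    # Import inside the child so each process sets up its own Twisted reactor.
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    process = CrawlerProcess(get_project_settings())
    process.crawl("myspider", start_urls=start_urls)  # hypothetical spider name
    process.start()  # blocks until this shard's crawl finishes


if __name__ == "__main__":
    urls = ["https://example.com/page/%d" % i for i in range(1000)]
    cores = multiprocessing.cpu_count()
    shards = [urls[i::cores] for i in range(cores)]  # round-robin split
    workers = [multiprocessing.Process(target=run_shard, args=(shard,))
               for shard in shards]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```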
If it is just the number of in-flight requests that is limiting speed, tweak the concurrency settings. First test your internet speed so you know your maximum bandwidth. Then open the network panel of your system monitor, run your spider, and compare the bandwidth you are using to that maximum. Increase concurrent requests until you stop seeing performance gains. The stopping point could be determined by the site's capacity (though only for small sites), the site's anti-scraping/DDoS protections (assuming you don't have proxies or VPNs), your bandwidth, or another chokepoint in the system.
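A rough starting point in `settings.py` might look like this; the numbers here are arbitrary and should be raised until throughput stops improving:

```python
# settings.py — concurrency knobs to tune while watching the network panel
CONCURRENT_REQUESTS = 64             # global cap (Scrapy's default is 16)
CONCURRENT_REQUESTS_PER_DOMAIN = 32  # per-domain cap (default is 8)
DOWNLOAD_DELAY = 0                   # no artificial delay while benchmarking
```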
The next thing to know is that, while requests are handled asynchronously, items are not. If you scrape a lot of text and write everything locally, the writes will block requests; you will see lulls in the system monitor's network panel. You can tweak CONCURRENT_ITEMS and perhaps get smoother network usage, but it will still take the same amount of time. If you are writing to a database, consider an INSERT DELAYED, or a queue that calls executemany once a threshold is reached, or both; a sketch of the batching approach follows below. People have also written pipelines that handle all db writes asynchronously.
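Here is a minimal sketch of the "queue with an executemany after a threshold" idea. It assumes MySQLdb and an `items(title, url)` table, both hypothetical; swap in your own driver and schema.

```python
import MySQLdb


class BufferedWritePipeline:
    def __init__(self):
        self.buffer = []
        self.threshold = 100  # flush after this many items

    def open_spider(self, spider):
        self.conn = MySQLdb.connect(host="localhost", user="scrapy",
                                    passwd="secret", db="scraping")
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        self.buffer.append((item["title"], item["url"]))
        if len(self.buffer) >= self.threshold:
            self.flush()
        return item

    def flush(self):
        # One executemany call instead of one INSERT per item.
        self.cursor.executemany(
            "INSERT INTO items (title, url) VALUES (%s, %s)", self.buffer)
        self.conn.commit()
        self.buffer = []

    def close_spider(self, spider):
        if self.buffer:
            self.flush()  # write whatever is left over
        self.conn.close()
```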
The last choke point could be memory. I have run into this issue on an AWS micro instance, though on a laptop it probably isn't an issue. If you don't need them, consider disabling the cache, cookies, and the dupefilter; of course, they can be very helpful. Concurrent items and requests also take up memory.
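If you do decide you can live without those features for a particular crawl, the settings look roughly like this:

```python
# settings.py — memory-saving switches; only use these if you truly
# don't need caching, cookies, or duplicate filtering for this crawl
HTTPCACHE_ENABLED = False   # no response cache
COOKIES_ENABLED = False     # no per-session cookie tracking
DUPEFILTER_CLASS = "scrapy.dupefilters.BaseDupeFilter"  # no-op dupe filter
CONCURRENT_ITEMS = 50       # lower than the default 100 if items are large
```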