I'm using scrapy-fake-useragent with Scrapy and keep getting this error on my Linux server.
Traceback (most recent call last):
  File "/usr/local/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 37, in process_request
    response = yield method(request=request, spider=spider)
  File "/usr/local/lib/python2.7/site-packages/scrapy_fake_useragent/middleware.py", line 27, in process_request
    request.headers.setdefault('User-Agent', self.ua.random)
  File "/usr/local/lib/python2.7/site-packages/fake_useragent/fake.py", line 98, in __getattr__
    raise FakeUserAgentError('Error occurred during getting browser')  # noqa
FakeUserAgentError: Error occurred during getting browser
The error shows up on the Linux server when I run multiple spiders concurrently, but it rarely happens on my laptop. What should I do to avoid it? Do I need to add more RAM, or is it something else? The server has 512 MB RAM and 1 vCPU.
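My understanding (possibly wrong) is that fake_useragent fetches its browser list from a remote source when it initializes, so this may be a network/fetch failure under concurrent load rather than a memory problem. As a workaround I was thinking of wrapping the UA lookup so it falls back to a pinned User-Agent string on failure. Here is a minimal sketch of that pattern; `safe_random_ua` and `FALLBACK_UA` are names I made up for illustration, not part of either library:

```python
# Sketch: fall back to a fixed User-Agent when the random one
# cannot be obtained (e.g. fake_useragent raises FakeUserAgentError
# because it failed to fetch its browser data).

FALLBACK_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0 Safari/537.36"
)

def safe_random_ua(get_random_ua):
    """Return a random User-Agent, or FALLBACK_UA if the provider raises.

    get_random_ua would be something like `lambda: ua.random` where
    `ua` is a fake_useragent.UserAgent instance.
    """
    try:
        return get_random_ua()
    except Exception:  # fake_useragent raises FakeUserAgentError here
        return FALLBACK_UA
```

I've also read that newer versions of scrapy-fake-useragent and fake_useragent have built-in fallback support (a `FAKEUSERAGENT_FALLBACK` setting / a `fallback` argument to `UserAgent`), but I'm not sure my versions have it. Would either approach be the right fix, or is the server itself the problem?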