
I'm using Scrapy to do some crawling with Splash, using the scrapinghub/splash Docker container, but the container exits by itself after a while with exit code 139. I'm running the scraper on an AWS EC2 instance with 1 GB of swap assigned.

I also tried running it in the background and viewing the logs afterwards, but nothing in them indicates an error; it just exits.

From what I understand, 139 corresponds to a segmentation fault on UNIX. Is there any way to check or log which part of memory was being accessed, or which code was being executed, so I can debug this?

Or can I increase the container's memory or swap size to avoid this?
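
For reference, what I had in mind for checking is roughly the sketch below: Docker records whether the kernel OOM-killed a container, and the host kernel log should show segfault messages. The container name `splash` is just a placeholder for whatever name or ID the container actually has.

```bash
# Find the exited container and see what Docker recorded about its death
docker ps -a

# Was it killed by the kernel OOM killer, and with what exit code?
docker inspect --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' splash

# Look for kernel-level segfault / OOM messages on the EC2 host (may need sudo)
dmesg -T | grep -iE 'segfault|oom|killed process'
```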

  • "Or can I increase the container memory or swap size to avoid this?" Did you try? – Gallaecio Nov 20 '19 at 08:50
  • @Gallaecio As far as I remember I didn't find a solution to this at the time, and ended up using a bigger EC2 instance with more memory and storage and optimizing my Scrapy spiders. What I found recently but haven't tried yet is that you can assign a memory size when you run your Docker container; you can also adjust almost everything easily from the Docker GUI, if applicable. See https://stackoverflow.com/a/53743328/6114068 – MtziSam Nov 22 '19 at 13:34
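
A minimal sketch of the approach MtziSam's comment describes, i.e. capping memory when starting the Splash container. The limit values, the port mapping, and Splash's `--maxrss` option are assumptions here, not settings confirmed anywhere in this thread.

```bash
# Start Splash with an explicit memory cap; --memory-swap is the total
# of RAM + swap the container may use, so this allows 1 GB of swap on top.
docker run -d -p 8050:8050 \
  --memory=1g --memory-swap=2g \
  --restart=always \
  scrapinghub/splash --maxrss 800   # assumed Splash option: restart the browser before hitting the cap (value in MB)
```

Whether this prevents the 139 exit or just turns it into a cleaner automatic restart depends on what is actually consuming the memory, but it at least bounds the container instead of letting it grow until it crashes.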

0 Answers