I'm using Scrapy with Splash for crawling, running Splash via the Scrapinghub/splash Docker container. However, after a while the container exits on its own with exit code 139. I'm running the scraper on an AWS EC2 instance with 1 GB of swap assigned.
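For context, I'm starting Splash more or less as in the standard docs (the port mapping and container name below are the usual defaults, not necessarily my exact command):

```sh
# Start Splash in the background, exposing the HTTP API on port 8050
docker run -d -p 8050:8050 --name splash scrapinghub/splash
```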
I also tried running it in the background and checking the logs afterwards, but nothing in them indicates an error; the container just exits.
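This is roughly what I checked (the container name is a placeholder):

```sh
# Tail the container logs (works even after the container has exited)
docker logs --tail 100 splash

# Confirm the exit code and whether Docker's OOM killer was involved
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' splash
```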
From what I understand, exit code 139 corresponds to a segmentation fault on UNIX (128 + SIGSEGV's signal number 11). Is there any way to check or log what memory was being accessed or what code was executing, so I can debug this?
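For example, would something along these lines capture the crash? This is a sketch based on general Linux segfault debugging, not anything Splash-specific I've verified:

```sh
# Check the host kernel log for the segfault record
# (it includes the faulting address and instruction pointer)
dmesg | grep -i segfault

# Write core dumps to a known host path; core_pattern is host-wide
# and also applies inside containers sharing the kernel
mkdir -p /tmp/cores
echo '/tmp/cores/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern

# Re-run the container with core dumps enabled and the dump dir mounted
docker run -d -p 8050:8050 --ulimit core=-1 \
    -v /tmp/cores:/tmp/cores --name splash scrapinghub/splash
```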
Alternatively, can I increase the container's memory or swap size to avoid this?
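E.g., would limits like these help, or just delay the crash? The `--maxrss` option comes from the Splash docs' production advice; the specific gigabyte figures here are made-up values for illustration:

```sh
# Cap container memory, allow some swap, and auto-restart on crashes;
# --maxrss asks Splash to treat ~3000 MB RSS as a soft limit
docker run -d -p 8050:8050 \
    --memory=4g --memory-swap=6g --restart=always \
    scrapinghub/splash --maxrss 3000
```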