I have a web app with a Rails API running on a Puma server with a MySQL database. I'm running a batch process that writes a lot of logs to a log file. At a specific moment, after some time running, the Puma server goes down. In the logs everything looks fine until, apparently for no reason, it goes down, but I don't see any error, not even in puma.err. I'm not sure if it's related to the server or maybe the database (something related to the pool?). I'm blind; I don't know where to look to debug the problem.
UPDATE: I think I have narrowed down the problem. I have realised that the Puma server always goes down when trying to load the same element. I ran the same test in development and it works there, so I think I know what is killing it. I'm not sure, but I strongly suspect that the element I'm trying to load in my batch process runs a lot of queries within a single transaction in the MySQL database. I think that somehow the transaction is not able to process all the queries and, for some reason, the Puma server goes down. I don't know if this makes sense, but this is my main suspect. I have read something about transaction sizes and log files: How do I determine maximum transaction size in MySQL?
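To illustrate the suspicion, the batch job does something roughly like this (a simplified sketch with made-up model names, not my actual code): everything runs inside one big transaction, so MySQL has to hold every change until the final COMMIT.

```ruby
# Simplified sketch of the suspect pattern (Element/Item are hypothetical names).
# Every UPDATE issued inside this block belongs to one single MySQL transaction.
ActiveRecord::Base.transaction do
  Element.find_each do |element|
    element.items.each do |item|
      item.update!(processed: true) # one query per item, all in the same transaction
    end
  end
end

# One workaround I'm considering: commit in smaller chunks so no single
# transaction grows unbounded.
Element.find_in_batches(batch_size: 500) do |batch|
  ActiveRecord::Base.transaction do
    batch.each { |element| element.items.update_all(processed: true) }
  end
end
```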
My new question is: if this is really happening, can I see an error related to it in any log file (Puma or MySQL)?
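In case it helps, this is the kind of check I assume I could run from a Rails console to find out where MySQL writes its error log (a hypothetical snippet, assuming my database user can read server variables):

```ruby
# Ask MySQL where its error log lives, from the Rails console.
row = ActiveRecord::Base.connection.select_one("SHOW VARIABLES LIKE 'log_error'")
puts row["Value"] # e.g. "/var/log/mysql/error.log" on a default Ubuntu install
```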
UPDATE 2: I'm attaching environment information:
DEVELOPMENT: macOS, Processor: 2.4 GHz, RAM: 8 GB
PRODUCTION: Ubuntu 14, Processor: 2.5 GHz, RAM: 3.7 GB (AWS instance). Regarding the Puma configuration, I'm not very skilled with it, but I'm starting the server with a config file of just two lines in both the development and production environments:
```ruby
stdout_redirect 'path_to_log_file.log', 'path_to_error_file.log', true
bind 'unix:///tmp/puma.sock'
```
When starting up I don't see any workers configuration either, so I assume I have a single worker and the default number of threads (0 to 16).
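If I understand the defaults correctly, my effective configuration would be equivalent to the sketch below (the workers/threads values are my assumption of Puma's defaults, not settings I have verified):

```ruby
# puma.rb — the same two lines I use, with the defaults made explicit.
# workers 0 means single (non-clustered) mode; 0..16 is, I believe,
# the default thread pool range in the Puma version I'm running.
stdout_redirect 'path_to_log_file.log', 'path_to_error_file.log', true
bind 'unix:///tmp/puma.sock'

workers 0
threads 0, 16
```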