Right now only the following are logged to logs/delayed_job.log:

2012-04-20T03:57:44+0000: Cacher completed after 5.3676
2012-04-20T03:57:44+0000: 1 jobs processed at 0.1744 j/s, 0 failed ...

What I'm trying to do is have it also log all my puts statements as well as my SQL queries, like it does in development mode, while still writing everything to log/delayed_job.log.

I tried to add the following to initializers/delayed_job_config.rb but no luck:

Delayed::Worker.logger = Rails.logger
Delayed::Worker.logger.level = Logger::DEBUG

(all my puts get logged now, but the job status logs are gone and there are still no SQL queries)

samvermette

4 Answers


In Rails v3.2.13 you can do:

Delayed::Worker.logger = ActiveSupport::Logger.new("log/delayed_job.log", Rails.logger.level)
Delayed::Worker.logger.auto_flushing = 1

# only reassign the AR logger if we're in a delayed_job process (rake task or script/delayed_job)
if caller.last =~ /script\/delayed_job/ or (File.basename($0) == "rake" and ARGV[0] =~ /jobs\:work/)
  ActiveRecord::Base.logger = Delayed::Worker.logger
end

But... ActiveSupport::BufferedLogger is being deprecated: https://github.com/rails/rails/blob/master/activesupport/lib/active_support/buffered_logger.rb#L18

Plus, you probably want to rotate your logs, right? So I think this is better:

Delayed::Worker.logger = Logger.new("log/delayed_job.log", 5, 104857600)

if caller.last =~ /script\/delayed_job/ or (File.basename($0) == "rake" and ARGV[0] =~ /jobs\:work/)
  ActiveRecord::Base.logger = Delayed::Worker.logger
end

That constructor keeps up to 5 rotated log files and rotates when the current log file hits 100 MB (104857600 bytes). See here for details: http://www.ruby-doc.org/stdlib-1.9.3/libdoc/logger/rdoc/Logger.html#method-c-new
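As a minimal standalone sketch of that rotation constructor (using a temp path instead of log/delayed_job.log so it can be run anywhere):

```ruby
require 'logger'
require 'tmpdir'

# Hypothetical path for the demo; the real app would use log/delayed_job.log.
log_path = File.join(Dir.tmpdir, 'delayed_job_demo.log')

# Keep up to 5 old files; rotate when the current file reaches 100 MB.
logger = Logger.new(log_path, 5, 100 * 1024 * 1024)
logger.level = Logger::DEBUG
logger.info('worker started')
logger.close
```

Writing 100 * 1024 * 1024 instead of the magic number 104857600 makes the rotation size obvious at a glance.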

Credit to: http://www.heedspin.com/2011/01/17/delayed-job-logging.html for the inspiration

Peter P.

Maybe this link is helpful to you: https://stackoverflow.com/a/7223681/445908

UPDATE: a bit more about puts vs. logger:

I am not sure how your implementation gets "puts" output stored into a log file, but I think it is almost always a better choice to use "logger" instead of "puts". The reasons are:

  1. In most cases puts can slow down your performance, especially in a production environment (in all of my past projects, puts was not allowed to appear in the source code). For Java programmers, System.out.println is the same story.

  2. logger has 5 default levels: debug, info, warn, error, and fatal, which can be controlled, beautifully formatted, and filtered by tools such as grep. For example, could puts produce the log info below?

    07:27:48 INFO: in device_resource, options: queryem??ltres67@blur.com 
    07:27:48 INFO: cloudset: ??100 
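The severity filtering and formatting described above can be sketched with the stdlib Logger alone (a StringIO stands in for the log file so the example is self-contained; the message strings are illustrative):

```ruby
require 'logger'
require 'stringio'

buffer = StringIO.new
logger = Logger.new(buffer)
logger.level = Logger::INFO            # DEBUG messages are filtered out

# A grep-friendly "HH:MM:SS SEVERITY: message" format.
logger.formatter = proc do |severity, time, _progname, msg|
  "#{time.strftime('%H:%M:%S')} #{severity}: #{msg}\n"
end

logger.debug('filtered out at INFO level')   # never written
logger.info('in device_resource, options: query=...')
print buffer.string
```

Swapping puts for calls like this gives you levels and a timestamped format for free, which puts cannot do.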
    
Siwei
  • Have tried all 3 solutions and none of them works. In all cases `log/delayed_job.log` only gets the job status messages, and `development.log` gets the SQL queries, but nowhere are my `puts` to be seen. – samvermette Apr 20 '12 at 05:39
  • So how do you make your `puts` get logged? Maybe you should let delayed_job know how that works. – Siwei Apr 20 '12 at 06:05
  • so use something like `logger.info` instead? – samvermette Apr 20 '12 at 06:06
  • Thanks for clarifying `puts` vs `logger`. Ended up using [this code](http://stackoverflow.com/a/4648531/87158) to output my job status messages, SQL queries as well as my custom log messages to a dedicated delayed_job.log file. – samvermette Apr 23 '12 at 04:22

This may be a bit late for the OP, but here's a different approach: use a DelayedJob plugin to set the Rails (and ActiveRecord::Base) loggers from the one assigned to the Worker.

Two steps. First, in your Delayed Job initializer, do:

require 'delayed_job_log_setup'
Delayed::Worker.plugins << DelayedJobLogSetup

Second, the plugin itself (which I put in lib/delayed_job_log_setup.rb):

require 'delayed_job'

class DelayedJobLogSetup < Delayed::Plugin
  callbacks do |lifecycle|
    lifecycle.before(:execute) do |worker|
      Rails.logger = worker.logger
      ActiveRecord::Base.logger = worker.logger
    end
  end
end

Unfortunately, my attempts to monkey-patch Object#puts to call the logger directly (inside the plugin) still didn't do what the OP wanted.

cobbr2
  • Excellent answer. This works correctly for jobs that are classes as well as jobs using the simple .delay. syntax. – dodgio Oct 28 '14 at 02:56

Another way to do this is to do the necessary log file configuration in the script/delayed_job file in your Rails application. Mine now looks like this:

#!/usr/bin/env ruby

require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
require 'delayed/command'

# This was added to make sure the DJ log has the same log level 
# as the rails app, and to ensure ActiveRecord uses the same log 
# when loaded inside a DJ worker process.
#
Delayed::Worker.logger ||= Logger.new(File.join(Rails.root, 'log', 'delayed_job.log'), Rails.configuration.log_level)
ActiveRecord::Base.logger = Delayed::Worker.logger

Delayed::Command.new(ARGV).daemonize

It's pretty simple, and it's clear what is going on with this approach.

And more importantly, there is no risk of messing up the AR logger in scenarios that don't involve a delayed worker process. That happened to us with an approach similar to some of the other answers, and it caused numerous headaches and confusion.

Shyam Habarakada
  • For some reason, with Rails 3.2.16 under AWS Elastic Beanstalk on Passenger Standalone, this created a new, empty, numbered log file containing one SQL query every five seconds (each time the daemon runs). Looks like it went up to 100 files before stopping or maybe starting over. Also, it looks like log_level cannot be set in the `Logger.new` method. – Mark Berry Apr 07 '15 at 02:10
  • @MarkBerry it sounds like you have some kind of log rotation component running outside of this code? I am unfamiliar with AWS Elastic Beanstalk, so cannot really comment about what it might introduce. – Shyam Habarakada Apr 14 '15 at 01:49
  • I don't think this is related to log rotation, as rotated logs go into a subdirectory. It's like there was something causing it to create a new file (every five seconds) rather than overwriting an old one. I'm thinking it has to do with having this code in the script file rather than an initializer. Are you using this code in production? I've had better luck going back to setting the log in an initializer. – Mark Berry Apr 14 '15 at 01:59