Simple question here, maybe with a complex answer. I have several Logstash Docker containers running on the same host using the JDBC input plugin. Each of them does work every minute. For example:
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/bin/mysql-connector-java-8.0.15.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    # useCursorFetch needed because jdbc_fetch_size isn't working??
    # https://discuss.elastic.co/t/logstash-jdbc-plugin/84874/2
    # https://stackoverflow.com/a/10772407
    jdbc_connection_string => "jdbc:mysql://${CP_LS_SQL_HOST}:${CP_LS_SQL_PORT}/${CP_LS_SQL_DB}?useCursorFetch=true&autoReconnect=true&failOverReadOnly=false&maxReconnects=10"
    statement => "select * from view_elastic_popularity_scores_all where updated_at > :sql_last_value"
    jdbc_user => "${CP_LS_SQL_USER}"
    jdbc_password => "${CP_LS_SQL_PASSWORD}"
    jdbc_fetch_size => "${CP_LS_FETCH_SIZE}"
    last_run_metadata_path => "/usr/share/logstash/codepen/last_run_files/last_run_popularity_scores_live"
    jdbc_page_size => "10000"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    schedule => "* * * * *"
  }
}
Notice the schedule is * * * * *? That's the crux. I have a box that's idle for roughly 50 seconds out of every minute, then working its ass off for x seconds to process data for all 10 Logstash containers. What would be amazing is a way to splay the schedules so that the 10 containers run independently, offset from each other by x seconds.
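One hedged sketch: Logstash's schedule option is handled by rufus-scheduler, which (in the versions I've seen) accepts a six-field cron expression where the first field is seconds. If that holds for your Logstash version, each container could fire at a different second of the minute via a per-container environment variable (CP_LS_CRON_OFFSET below is hypothetical; set it to 0, 6, 12, ..., 54 across 10 containers):

```
input {
  jdbc {
    # ... same settings as above ...
    # Six-field cron: seconds come first, so "15 * * * * *" would
    # fire at second 15 of every minute. Give each container its
    # own offset via its environment.
    schedule => "${CP_LS_CRON_OFFSET} * * * * *"
  }
}
```

The offset would be passed at launch, e.g. docker run -e CP_LS_CRON_OFFSET=6 ..., so the containers spread their work across the minute instead of all piling onto second zero.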
Is this just a dream? Like world peace, or time away from my kids?
Thanks