
I want to parse AWS ELB logs [stored in an S3 bucket] with Logstash, which is set up inside a dockerised ELK stack.

I cloned this repo. Here are its docs.

I added my logstash config file like this [and commented out all the others]:

# AWS ELB configuration file
ADD ./aws_elb_logs.conf /etc/logstash/conf.d/aws_elb_logs.conf

The config file is the following:

input {
    s3 {
        # Logging_user AWS creds
        access_key_id     => "fjnsdfjnsdjfnjsdn"
        secret_access_key => "asdfsdfsdfsdfsdfsdfsdfsd"

        bucket            => "elb-access-logs"

        region            => "us-west-2"

        # keep track of the last processed file
        sincedb_path    => "./last-s3-file"
        codec           => "json"
        type            => "elb"
    }
}

filter {
    if [type] == "elb" {
        grok {
            match => [ 'message', '%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:loadbalancer} %{IP:client_ip}:%{NUMBER:client_port:int} (?:%{IP:backend_ip}:%{NUMBER:backend_port:int}|-) %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_time:float} %{NUMBER:response_processing_time:float} (?:%{NUMBER:elb_status_code:int}|-) (?:%{NUMBER:backend_status_code:int}|-) %{NUMBER:received_bytes:int} %{NUMBER:sent_bytes:int} "(?:%{WORD:verb}|-) (?:%{GREEDYDATA:request}|-) (?:HTTP/%{NUMBER:httpversion}|-( )?)" "%{DATA:userAgent}"( %{NOTSPACE:ssl_cipher} %{NOTSPACE:ssl_protocol})?' ]
        }
        grok {
            match => [ "request", "%{URIPROTO:http_protocol}" ]
        }
        geoip {
            source  => "client_ip"
            target  => "geoip"
            add_tag => [ "geoip" ]
        }
        useragent {
            source => "userAgent"
        }
        date {
            match => ["timestamp", "ISO8601"]
        }
    }
}

output {
    elasticsearch {
        hosts => localhost
        port => "9200"
        index => "logstash-%{+YYYY.MM.dd}"
    }
    stdout {
      debug => true
   }
}

When I create the container, I get the following error log from Logstash:

==> /var/log/logstash/logstash.log <==
{:timestamp=>"2016-10-18T13:04:40.798000+0000", :message=>"Pipeline aborted due to error", :exception=>"LogStash::ConfigurationError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/config/mixin.rb:88:in `config_init'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/config/mixin.rb:72:in `config_init'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/outputs/base.rb:79:in `initialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/output_delegator.rb:74:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:181:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:181:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/agent.rb:491:in `start_pipeline'"], :level=>:error}
{:timestamp=>"2016-10-18T13:04:43.801000+0000", :message=>"stopping pipeline", :id=>"main"}

I cannot understand what I am doing wrong!

Any pointers would be welcome.

EDIT:

Now there is this:

==> /var/log/logstash/logstash.log <==
{:timestamp=>"2016-10-18T14:26:50.492000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::S3 access_key_id=>\"gsfgdfgdfgdfgdfg\", secret_access_key=>\"dsfgsdfgsdgsdfgsdfg\", bucket=>\"elb-access-logs-dr\", region=>\"us-west-2\", sincedb_path=>\"./last-s3-file\", codec=><LogStash::Codecs::JSON charset=>\"UTF-8\">, type=>\"elb\", use_ssl=>true, delete=>false, interval=>60, temporary_directory=>\"/opt/logstash/logstash\">\n Error: The request signature we calculated does not match the signature you provided. Check your key and signing method.", :level=>:error}

Kostas Demiris

1 Answer

If you are using one of the containers with a Logstash version > 2, your configuration of the elasticsearch output plugin is where the error is coming from. In Logstash 2.x the port configuration option was removed; the port is now configured together with the host in the hosts option (cf. the doc).
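A minimal sketch of the corrected output section, assuming Elasticsearch listens on localhost:9200 (as in the original config). Note that the stdout plugin's debug option was likewise dropped around 2.x; codec => rubydebug gives the equivalent pretty-printed output:

```conf
output {
    elasticsearch {
        # In Logstash 2.x the separate `port` option is gone;
        # the port is appended to each entry in `hosts` instead.
        hosts => ["localhost:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
    }
    stdout {
        # Replaces the removed `debug => true` option.
        codec => rubydebug
    }
}
```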

baudsp
  • Thank you, that made some progress! Now I have another one :\. Please check the edit above – Kostas Demiris Oct 18 '16 at 14:28
  • @KostasDemiris I've taken a look at the edit. I have zero experience with S3 buckets and the s3 plugin, so I can only say what's in the log you provided: there is a problem with your credentials. If you want help with this problem, you'll have to ask another question – baudsp Oct 18 '16 at 14:39
  • It's OK, I have found it. For reference: http://stackoverflow.com/a/20720402/1163667 . Thank you again, it was such a sneaky one ;] – Kostas Demiris Oct 18 '16 at 14:44
  • @KostasDemiris Congrats! I sure would not have been able to help with that problem – baudsp Oct 18 '16 at 14:46