5

I am using Logstash, Elasticsearch and Kibana to analyze my logs. I alert by email, via the email output in Logstash, when a particular string appears in a log line:

email {
    match => [ "Session Detected", "logline,*Session closed*" ]
    ...
}

This works fine.

Now, I want to alert on the count of a field (when a threshold is crossed):

E.g. if user is a field, I want to alert when the number of unique users goes above 5.

Can this be done via the email output in Logstash?
Please help.

EDIT: As @Alcanzar suggested, I did this:

Config file:

    if [server] == "Server2" and [logtype] == "ABClog" {

        grok {
            match => ["message", "%{TIMESTAMP_ISO8601:timestamp} %{HOSTNAME:server-name} abc\[%{INT:id}\]: \(%{USERNAME:user}\) CMD \(%{GREEDYDATA:command}\)"]
        }

        metrics {
            meter   => ["%{user}"]
            add_tag => "metric"
        }

    }

So, as above: for Server2 and ABClog I have a grok pattern to parse my file, and I want the metrics filter applied to the user field extracted by grok.

I did that in the config file as shown above, but I get strange behaviour when I check the Logstash console with -vv.

So if there are 9 log lines in the file, it parses those 9 first. After that the metrics part starts, but there the message field is not a log line from the file; it is the user name of my PC, so it gives a _grokparsefailure. Something like this:

 output received {
   :event=>{"@version"=>"1", "@timestamp"=>"2014-06-17T10:21:06.980Z", "message"=>"my-pc-name", 
    "root.count"=>2, "root.rate_1m"=>0.0, "root.rate_5m"=>0.0, "root.rate_15m"=>0.0, 
    "abc.count"=>2, "abc.rate_1m"=>0.0, "abc.rate_5m"=>0.0, "abc.rate_15m"=>0.0, "tags"=>["metric", 
    "_grokparsefailure"]}, :level=>:debug, :file=>"(eval)", :line=>"137"
    }

Any help is appreciated.

Siddharth Trikha

2 Answers

7

I believe what you need is http://logstash.net/docs/1.4.1/filters/metrics.

You'd want to use the metrics filter to calculate the rate of your events, and then use thing.rate_1m or thing.rate_5m in an if statement around your email output.

For example:

filter {
  if [message] =~ /whatever_message_you_want/ {
    metrics {
        meter =>  "user"
        add_tag =>  "metric"
    }
  }
}

output {
  if "metric" in [tags] and [user.rate_1m] > 1 {
    email { ... }
  }
}
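
For illustration only, a more filled-in version of that output section might look like the sketch below. The addresses, subject and body are made-up placeholders rather than part of the original answer, and the condition is taken verbatim from the example above:

output {
  if "metric" in [tags] and [user.rate_1m] > 1 {
    email {
      # Placeholder recipients and text -- adjust to your own mail setup.
      to      => "ops@example.com"
      from    => "logstash@example.com"
      subject => "Logstash alert: user event rate threshold crossed"
      body    => "The 1-minute event rate for the user meter went above the threshold."
    }
  }
}

Any SMTP connection settings required by your mail server would still need to be added to the email block.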
Alcanzar
  • The answer above will count the number of users, but I want to alert when the number of UNIQUE users (as in the question) crosses a threshold. Can that be done? – Siddharth Trikha Jun 17 '14 at 04:24
  • If you want to measure the rate of unique users, you are going to need to create a custom filter that fits your situation, since none of the pre-existing filters will do exactly what you want. There is a throttle filter that might help you, but it seems like your requirements don't fit what Logstash can do out of the box. – Alcanzar Jun 17 '14 at 14:29
  • OK, thanks. Any idea about the issue mentioned in the edit? – Siddharth Trikha Jun 17 '14 at 16:49
  • If you are talking about the _grokparsefailure, just add an `if "_grokparsefailure" in [tags] { drop {} }` right after the grok (see the sketch after these comments). – Alcanzar Jun 17 '14 at 18:07
  • I am talking about the message field: it equals the log line when it goes through the grok filter, but when it comes from the metrics filter the message field equals my-pc-name (as shown in the question). Thus it gives a _grokparsefailure, as it does not find the user field inside message. Why this behaviour? – Siddharth Trikha Jun 17 '14 at 18:24
  • You are getting the _grokparsefailure because the event doesn't match your pattern. Try putting in the drop {} that I mention above and it will ignore the events that don't match your grok. – Alcanzar Jun 17 '14 at 19:22
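
To make the drop {} suggestion from these comments concrete, here is a minimal sketch based on the config from the question. One assumption on my part: the debug output in the question shows that the flushed metric events also carry the _grokparsefailure tag, so the condition below additionally checks for the "metric" tag to avoid discarding those events; adjust as needed.

filter {
  if [server] == "Server2" and [logtype] == "ABClog" {
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:timestamp} %{HOSTNAME:server-name} abc\[%{INT:id}\]: \(%{USERNAME:user}\) CMD \(%{GREEDYDATA:command}\)"]
    }

    # Right after the grok: discard events that did not match the pattern,
    # but keep the events generated by the metrics filter (tagged "metric").
    if "_grokparsefailure" in [tags] and "metric" not in [tags] {
      drop {}
    }

    metrics {
      meter   => ["%{user}"]
      add_tag => "metric"
    }
  }
}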
0

Aggregating on the Logstash side is fairly limited. It also increases the state Logstash must keep, so memory consumption may grow. Alerts that run on the Elasticsearch layer offer more freedom and possibilities.

Logz.io alerts on top of ELK are described in this blog post: http://logz.io/blog/introducing-alerts-for-elk/

Tomer Levy