
I currently have a query that aggregates events over the last hour and alerts my team if the event count exceeds a specific threshold. The query was recently disabled by accident, and it turns out there were times when the alert should have fired but did not.
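
For context, the existing hourly alert search is roughly along these lines (the index name and threshold n here are placeholders, not my exact query):

  index="some_index" earliest=-1h@h latest=@h | stats count | where count > n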

My goal is to apply this alert query logic to the previous month and determine how many times the alert would have fired had it been functional. However, I am having a hard time figuring out how best to group the events by hour. In pseudocode, running over a 30-day time frame, I would have something like:

  index="some_index" | where count > n | group by hour

Hopefully this makes sense; if not, I am happy to provide clarification.

Thanks in advance

jjohnson8

1 Answer


This should get you started:

index=foo | bin span=1h _time | stats count by _time | where count > n
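
To count how many times the alert would have fired over the past month, one way to extend this (still using the placeholder index name and threshold n) is to restrict the search to the last 30 days and then count the hours that exceed the threshold:

  index=foo earliest=-30d@h latest=@h
  | bin span=1h _time
  | stats count by _time
  | where count > n
  | stats count AS hours_over_threshold

The first stats gives one row per hour with its event count, the where keeps only the hours that would have triggered the alert, and the final stats counts those rows.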
RichG
  • This is perfect! Works exactly as I'd hoped. Thanks for the help on this. I seem to struggle with Splunk's documentation for some reason. – jjohnson8 Aug 08 '18 at 16:39
  • If you find the docs unclear, submit feedback (not a comment). The docs staff is good about making changes in response to feedback. – RichG Aug 08 '18 at 22:04