
I have events indexed in Splunk, each as a JSON object like the one below. How can I write a Splunk query that finds all failures which appear in both the "failed" and "passed" arrays?

"output":{
          "date" : "21-09-2017"
          "failed": [ "fail_1", **"fail_2"** ],
          "passed": [ "pass_1", "pass_2" , **"fail_2"**]
}

For the above example, the result would be "fail_2".


1 Answer


You can do something like:

| makeresults
| eval x = "{\"output\":{\"date\" : \"21-09-2017\",\"failed\": [ \"fail_1\", \"fail_2\"],\"passed\": [ \"pass_1\", \"pass_2\" , \"fail_2\"]}}"
| eval x = mvappend(x,"{\"output\":{\"date\" : \"21-09-2017\",\"failed\": [ \"f_1\", \"f_2\"],\"passed\": [ \"f_1\", \"pass_2\" , \"f_2\"]}}")
| mvexpand x
| streamstats count as id 
| spath input=x
| rename "output.failed{}" as failed, "output.passed{}" as passed, "output.date" as date
| mvexpand failed
| eval common_field = if(isnotnull(mvfind(passed, failed)), failed, null())
| stats values(date) as date, values(failed) as failed, values(passed) as passed, values(common_field) as common_field by id

The example contains two sample log events in which failed and passed have common values. streamstats is used to assign a unique id to each event, since I did not see a unique id in your sample. spath parses the JSON object into fields. Once that is done, mvexpand creates one row for each value of failed. mvfind is then used to find the values of the failed field that match any of the values of the passed field. Finally, the related rows are recombined using the unique id.
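If your events are already indexed as JSON, the same logic should work against a normal search instead of the makeresults test data. Here is a minimal sketch, assuming a hypothetical index and sourcetype (my_index / my_sourcetype) and that each raw event contains the output object shown in the question; the final where keeps only events that actually have an overlap:

index=my_index sourcetype=my_sourcetype
| spath path=output.date output=date
| spath path=output.failed{} output=failed
| spath path=output.passed{} output=passed
| streamstats count as id
| mvexpand failed
| eval common_field = if(isnotnull(mvfind(passed, failed)), failed, null())
| stats values(date) as date, values(failed) as failed, values(passed) as passed, values(common_field) as common_field by id
| where isnotnull(common_field)

On Splunk 8.0 or later you could also skip the mvexpand/stats round trip and compute the overlap in place with mvmap, which evaluates an expression once per value of a multivalue field:

| eval common_field = mvmap(failed, if(isnotnull(mvfind(passed, failed)), failed, null()))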
