There is a problem I have been struggling with for a while.
I have data in JSON format:
"documentsList": [{
"commandScn": "108058599",
"commandCommitScn": "108058600",
"commandSequence": "0",
"commandType": "UPDATE",
"commandTimestamp": "2017-08-22 14:37:53+03:000",
"objectDBName": "DEV2",
"objectSchemaName": "YISHAIN",
"objectId": "CUSTOMERS",
"changedFieldsList": [{
"fieldId": "CUSTOMER_ID",
"fieldType": "NUMBER",
"fieldValue": "17",
"fieldChanged": "N"
}, {
"fieldId": "CUSTOMER_FIRST_NAME",
"fieldType": "VARCHAR2",
"fieldValue": "Daniel",
"fieldChanged": "N"
}, {
"fieldId": "CUSTOMER_LAST_NAME",
"fieldType": "VARCHAR2",
"fieldValue": "Washington",
"fieldChanged": "N"
}, {
"fieldId": "CUSTOMER_COUNTRY",
"fieldType": "VARCHAR2",
"fieldValue": "France",
"fieldChanged": "N"
}, {
"fieldId": "CUSTOMER_CITY",
"fieldType": "VARCHAR2",
"fieldValue": "La Roche-sur-Yon",
"fieldChanged": "N"
	}]
}]
What I want is to fetch the values of the fields fieldId, fieldType, fieldValue, fieldChanged, etc., and then aggregate by these terms in Kibana 5.6.
My questions are:
How can I tell grok to fetch everything after a word (because I want the values of the keys, not the keys themselves)? Does Elasticsearch support lookbehind? If so, can anyone give me an example of how to accomplish this?
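As far as I know, grok's regex engine (Oniguruma) does support lookbehind, but you usually don't need it: a grok named capture keeps only the value, not the key. A minimal sketch of a Logstash filter, assuming one field pair per line in a field called message (the pattern and target names here are assumptions, not a tested config):

```
filter {
  grok {
    # %{DATA:fieldValue} captures just the text between the quotes
    # into a field named fieldValue; the literal key text is matched
    # but discarded, so no lookbehind is required.
    match => {
      "message" => '"fieldId": "%{DATA:fieldId}", "fieldType": "%{DATA:fieldType}", "fieldValue": "%{DATA:fieldValue}", "fieldChanged": "%{DATA:fieldChanged}"'
    }
  }
}
```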
If Elastic's regex doesn't support it, how can I get this done? If I need to use regex, what would be the way to get the values? And what if I had to do it without lookbehind and these methods (plain JavaScript regex)?
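For the no-lookbehind case, a plain capture group does the same job in any JavaScript engine (lookbehind, `(?<=...)`, only landed in newer engines). A minimal sketch, using a line from the sample above; the variable and group layout is just for illustration:

```javascript
// One key/value pair from the sample document.
const line = '"fieldValue": "La Roche-sur-Yon",';

// Match the key literally, but capture only what sits between the
// quotes after it. The key text is consumed, not captured, so no
// lookbehind is needed.
const re = /"fieldValue":\s*"([^"]*)"/;
const m = line.match(re);
console.log(m[1]); // → La Roche-sur-Yon

// The same idea generalizes to all four keys at once.
const doc = '"fieldId": "CUSTOMER_CITY", "fieldType": "VARCHAR2"';
const pairRe = /"(fieldId|fieldType|fieldValue|fieldChanged)":\s*"([^"]*)"/g;
for (const [, key, value] of doc.matchAll(pairRe)) {
  console.log(key, value);
}
```

The alternation in pairRe captures the key name as group 1 and its value as group 2, which is usually simpler to maintain than four separate lookbehind patterns.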
Thanks!
EDIT: This is not a duplicate of How to parse json in logstash /grok from a text file line?, because I want to use regex rather than the json filter.