Suppose I have a JSON document (sent from Packetbeat in this case) with a structure like this:
{
  "source": "http://some/url/",
  "items": [
    {"name": "item1", "value": 1},
    {"name": "item2", "value": 2}
  ]
}
How can I have Elasticsearch index these as separate documents, such that I can retrieve them like this:
GET http://elasticsearch:9200/indexname/doc/item1
{
  "source": "http://some/url/",
  "item": {
    "name": "item1",
    "value": 1
  }
}
GET http://elasticsearch:9200/indexname/doc/item2
{
  "source": "http://some/url/",
  "item": {
    "name": "item2",
    "value": 2
  }
}
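Put another way, the end state I'm after is as if I had sent something like this bulk request myself, using the item names as document IDs (index and type names are just the ones from my example):

POST indexname/doc/_bulk
{ "index": { "_id": "item1" } }
{ "source": "http://some/url/", "item": { "name": "item1", "value": 1 } }
{ "index": { "_id": "item2" } }
{ "source": "http://some/url/", "item": { "name": "item2", "value": 2 } }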
Can an ingest pipeline, using Painless or some other means, achieve this? (Perhaps reindexing?)
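For context, the closest thing I can construct is a _reindex with a Painless script, along these lines (the index names are placeholders I've made up). But as far as I can tell, the script still emits exactly one destination document per source document, so it can only keep, say, the first array element rather than fanning the array out; that limitation is the crux of my question:

POST _reindex
{
  "source": { "index": "packetbeat-raw" },
  "dest": { "index": "indexname" },
  "script": {
    "lang": "painless",
    "source": "ctx._source.item = ctx._source.items[0]; ctx._source.remove('items')"
  }
}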
(The data come from Packetbeat, which is efficient for the large volumes involved, and consist of arrays of similar items, more complex than the example above. I'm not using Logstash and would rather avoid it for simplicity, but if it's necessary I can add it; I've sketched below what I imagine that would look like. Obviously I could split the document with a programming language before sending it, but if possible I'd like to do this within the Elastic Stack, to minimise additional dependencies.)
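If Logstash does turn out to be unavoidable, I assume its split filter is the sort of thing that would do it; a minimal sketch, where the host and index are placeholders from my example:

filter {
  split {
    field  => "items"   # array to fan out: one event per element
    target => "item"    # each element lands in "item" on its own event
  }
}

output {
  elasticsearch {
    hosts       => ["http://elasticsearch:9200"]
    index       => "indexname"
    document_id => "%{[item][name]}"   # so GET .../indexname/doc/item1 works
  }
}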