
I want to do a fairly involved query/aggregation, and I can't see how to do it because I've just started working with Elasticsearch. The documents I have look something like this:

{
  "keyword": "some keyword",
  "items": [
    {
      "name":"my first item",
      "item_property_1":"A",
      ( other properties here )
    },
    {
      "name":"my second item",
      "item_property_1":"B",
      ( other properties here )
    },
    {
      "name":"my third item",
      "item_property_1":"A",
      ( other properties here )
    }
  ]
  ( other properties... )
},
{
  "keyword": "different keyword",
  "items": [
    {
      "name":"cool item",
      "item_property_1":"A",
      ( other properties here )
    },
    {
      "name":"awesome item",
      "item_property_1":"C",
      ( other properties here )
    }
  ]
  ( other properties... )
},
( other documents... )

Now, what I would like to do is, for each keyword, count how many items there are for each of the possible values of item_property_1. That is, I want a bucket aggregation that would produce the following response:

{
  "keyword": "some keyword",
  "item_property_1_aggregation": [
    {
      "key": "A",
      "count": 2
    },
    {
      "key": "B",
      "count": 1
    }
  ]
},
{
  "keyword": "different keyword",
  "item_property_1_aggregation": [
    {
      "key": "A",
      "count": 1
    },
    {
      "key": "C",
      "count": 1
    }
  ]
},
( other keywords... )

If mappings are necessary, could you also specify which? I don't have any non-default mappings; I just dumped everything in there.

EDIT: To save you the trouble, here is the bulk PUT for the example above:

PUT /test/test/_bulk
{ "index": {}}
{  "keyword": "some keyword",  "items": [    {      "name":"my first item",      "item_property_1":"A"    },    {      "name":"my second item",      "item_property_1":"B"    },    {      "name":"my third item",      "item_property_1":"A"     }  ]}
{ "index": {}}
{  "keyword": "different keyword",  "items": [    {      "name":"cool item",      "item_property_1":"A"    },    {      "name":"awesome item",      "item_property_1":"C"    }  ]}

EDIT2:

I just tried this:

POST /test/test/_search
{
    "size":2,
    "aggregations": {
        "property_1_count": {
            "terms":{
                "field":"item_property_1"
            }
        }
    }
}

and got this:

"aggregations": {
   "property_1_count": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
         {
            "key": "a",
            "doc_count": 2
         },
         {
            "key": "b",
            "doc_count": 1
         },
         {
            "key": "c",
            "doc_count": 1
         }
      ]
   }
}

Close, but no cigar. You can see what's happening: it's bucketing over each item_property_1 irrespective of the keyword it belongs to. I'm sure the solution involves setting some mapping correctly, but I can't put my finger on it. Suggestions?

EDIT3: Based on this: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-nested-type.html I want to try mapping the items property as a nested type. To do that, I tried:

PUT /test/_mapping/test
{
    "test":{
        "properties": {
            "items": {
                "type": "nested",
                "properties": {
                    "item_property_1":{"type":"string"}
                }
            }
        }
    }
}

However, this returns an error:

{
   "error": "MergeMappingException[Merge failed with failures {[object mapping [items] can't be changed from non-nested to nested]}]",
   "status": 400
}

This might have to do with the warning on that page: "changing an object type to nested type requires reindexing."

So, how do I do that?

1 Answer


Nice tries, you were almost there! Here is what I came up with. Based on your mapping proposal, the mapping I'm using is the following. Note that both keyword and item_property_1 are indexed as not_analyzed, so the terms aggregations will bucket on the exact original values; your earlier attempt returned lowercase keys (a, b, c) because the default analyzer tokenizes and lowercases string fields.

curl -XPUT localhost:9200/test/_mapping/test -d '{
  "test": {
    "properties": {
      "keyword": {
        "type": "string",
        "index": "not_analyzed"
      },
      "items": {
        "type": "nested",
        "properties": {
          "name": {
            "type": "string"
          },
          "item_property_1": {
            "type": "string",
            "index": "not_analyzed"
          }
        }
      }
    }
  }
}'

Note: you need to wipe and reindex your data, since you cannot change an existing field from non-nested to nested.
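If you'd rather not re-collect the source documents, you can copy them out of the old index and into a freshly created one that carries the nested mapping. A minimal stdlib-only Python sketch of that scroll-and-bulk loop might look like this (the target index name `test_v2`, the page size, and the scroll request format are assumptions; adapt them to your ES version):

```python
# Sketch: rebuild an index by scrolling the old one and bulk-indexing the
# hits into a new index that already has the nested mapping applied.
import json
import urllib.request


def bulk_body(hits, target_index, target_type):
    """Format scrolled hits as an NDJSON body for the _bulk endpoint:
    one action line followed by one source line per document."""
    lines = []
    for hit in hits:
        lines.append(json.dumps({"index": {"_index": target_index,
                                           "_type": target_type,
                                           "_id": hit["_id"]}}))
        lines.append(json.dumps(hit["_source"]))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline


def post(url, body):
    """POST a JSON/NDJSON body and decode the JSON response."""
    req = urllib.request.Request(url, data=body.encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    return json.loads(urllib.request.urlopen(req).read())


def reindex(base, source_index, target_index, doc_type):
    """Scroll through source_index and bulk-index every hit into target_index."""
    page = post(base + "/" + source_index + "/_search?scroll=1m",
                json.dumps({"size": 500, "query": {"match_all": {}}}))
    while page["hits"]["hits"]:
        post(base + "/_bulk",
             bulk_body(page["hits"]["hits"], target_index, doc_type))
        page = post(base + "/_search/scroll",
                    json.dumps({"scroll": "1m",
                                "scroll_id": page["_scroll_id"]}))


# Usage (against a live cluster):
#   reindex("http://localhost:9200", "test", "test_v2", "test")
```

Nothing here is specific to this mapping change; the same loop works any time a mapping change forces a full reindex.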

Then I created some data with the bulk query you shared:

curl -XPOST localhost:9200/test/test/_bulk -d '
{ "index": {}}
{  "keyword": "some keyword",  "items": [    {      "name":"my first item",      "item_property_1":"A"    },    {      "name":"my second item",      "item_property_1":"B"    },    {      "name":"my third item",      "item_property_1":"A"     }  ]}
{ "index": {}}
{  "keyword": "different keyword",  "items": [    {      "name":"cool item",      "item_property_1":"A"    },    {      "name":"awesome item",      "item_property_1":"C"    }  ]}
'

Finally, here is the aggregation query you can use to get the results you expect. We first bucket by keyword using a terms aggregation; then, since items is now a nested type, the key is to wrap a nested aggregation (with "path": "items") around a terms sub-aggregation on the items.item_property_1 field.

{
  "size": 0,
  "aggregations": {
    "by_keyword": {
      "terms": {
        "field": "keyword"
      },
      "aggs": {
        "prop_1_count": {
          "nested": {
            "path": "items"
          },
          "aggs": {
            "prop_1": {
              "terms": {
                "field": "items.item_property_1"
              }
            }
          }
        }
      }
    }
  }
}

Running that query on your data set will yield this:

{
  ...
  "aggregations" : {
    "by_keyword" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [ {
        "key" : "different keyword",       <---- keyword 1
        "doc_count" : 1,
        "prop_1_count" : {
          "doc_count" : 2,
          "prop_1" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 0,
            "buckets" : [ {                <---- buckets for item_property_1
              "key" : "A",
              "doc_count" : 1
            }, {
              "key" : "C",
              "doc_count" : 1
            } ]
          }
        }
      }, {
        "key" : "some keyword",            <---- keyword 2
        "doc_count" : 1,
        "prop_1_count" : {
          "doc_count" : 3,
          "prop_1" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 0,
            "buckets" : [ {                <---- buckets for item_property_1
              "key" : "A",
              "doc_count" : 2
            }, {
              "key" : "B",
              "doc_count" : 1
            } ]
          }
        }
      } ]
    }
  }
}
Val
  • That's exactly what I was looking for, thank you. Also, good idea to index `item_property_1` as "not_analyzed" :-) Just one last thing. The actual index in my db already has 7k large documents that I'd rather not collect again. I can create a new index with the new mapping, but how can I then copy the old index into the new one? –  Aug 06 '15 at 10:29
  • See [this question](http://stackoverflow.com/questions/31853262/how-to-move-data-from-one-elasticsearch-index-to-another-using-the-bulk-api/31853454#31853454), I just answered it ;-) – Val Aug 06 '15 at 10:33
  • Ah alright, if I have to install logstash to write scripts, I'd rather not learn a new tool which I won't use any time soon and instead write the scripts in Python :-) Thanks anyway mate –  Aug 06 '15 at 10:57
  • I was mentioning Logstash because it's a very useful tool for many different use cases. It provides all the boilerplate code for the input/filter/output pattern that you never want to have to write again because your valuable time is better spent on high added-value application logic than such boring tasks. Just my 2 cents – Val Aug 06 '15 at 11:14
  • Hey Val another question. If `item` is not placed on the root, but is itself an array element of another key, several levels deep, do you have to have each level as `"type":"nested"` or is it enough to have the innermost key with that type? –  Aug 06 '15 at 16:37
  • You should ask another question and maybe reference this one for giving some the context, as the answer could be helpful to others, too. – Val Aug 06 '15 at 16:54
  • Here it is: http://stackoverflow.com/questions/31864722/elasticsearch-aggregation-of-deep-nested-type –  Aug 06 '15 at 21:06