
I want to use ElasticSearch to search filenames (not the files' content). I therefore need to match a part of the filename exactly (no fuzzy search).

Example:
I have files with the following names:

My_first_file_created_at_2012.01.13.doc
My_second_file_created_at_2012.01.13.pdf
Another file.txt
And_again_another_file.docx
foo.bar.txt

Now I want to search for 2012.01.13 to get the first two files.
A search for file or ile should return all filenames except the last one.

How can I accomplish that with ElasticSearch?

This is what I have tested, but it always returns zero results:

curl -X DELETE localhost:9200/files
curl -X PUT    localhost:9200/files -d '
{
  "settings" : {
    "index" : {
      "analysis" : {
        "analyzer" : {
          "filename_analyzer" : {
            "type" : "custom",
            "tokenizer" : "lowercase",
            "filter"    : ["filename_stop", "filename_ngram"]
          }
        },
        "filter" : {
          "filename_stop" : {
            "type" : "stop",
            "stopwords" : ["doc", "pdf", "docx"]
          },
          "filename_ngram" : {
            "type" : "nGram",
            "min_gram" : 3,
            "max_gram" : 255
          }
        }
      }
    }
  },

  "mappings": {
    "files": {
      "properties": {
        "filename": {
          "type": "string",
          "analyzer": "filename_analyzer"
        }
      }
    }
  }
}
'

curl -X POST "http://localhost:9200/files/file" -d '{ "filename" : "My_first_file_created_at_2012.01.13.doc" }'
curl -X POST "http://localhost:9200/files/file" -d '{ "filename" : "My_second_file_created_at_2012.01.13.pdf" }'
curl -X POST "http://localhost:9200/files/file" -d '{ "filename" : "Another file.txt" }'
curl -X POST "http://localhost:9200/files/file" -d '{ "filename" : "And_again_another_file.docx" }'
curl -X POST "http://localhost:9200/files/file" -d '{ "filename" : "foo.bar.txt" }'
curl -X POST "http://localhost:9200/files/_refresh"


FILES='
http://localhost:9200/files/_search?q=filename:2012.01.13
'

for file in ${FILES}
do
  echo; echo; echo ">>> ${file}"
  curl "${file}&pretty=true"
done
– Biggie

3 Answers


You have various problems with what you pasted:

1) Incorrect mapping

When creating the index, you specify:

"mappings": {
    "files": {

But your type is actually file, not files. If you checked the mapping, you would see that immediately:

curl -XGET 'http://127.0.0.1:9200/files/_mapping?pretty=1' 

# {
#    "files" : {
#       "files" : {
#          "properties" : {
#             "filename" : {
#                "type" : "string",
#                "analyzer" : "filename_analyzer"
#             }
#          }
#       },
#       "file" : {
#          "properties" : {
#             "filename" : {
#                "type" : "string"
#             }
#          }
#       }
#    }
# }
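A quick way to fix just this, without recreating the index, would be to put the mapping on the file type explicitly. A minimal sketch (note: if documents have already been indexed, the file type already exists with a default string mapping and the analyzer change will be rejected, so in practice you will want to recreate the index as shown further down):

curl -XPUT 'http://127.0.0.1:9200/files/file/_mapping' -d '
{
  "file" : {
    "properties" : {
      "filename" : {
        "type" : "string",
        "analyzer" : "filename_analyzer"
      }
    }
  }
}
'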

2) Incorrect analyzer definition

You have specified the lowercase tokenizer, but that removes anything that isn't a letter (see the docs), so your numbers are being removed completely.

You can check this with the analyze API:

curl -XGET 'http://127.0.0.1:9200/_analyze?pretty=1&text=My_file_2012.01.13.doc&tokenizer=lowercase' 

# {
#    "tokens" : [
#       {
#          "end_offset" : 2,
#          "position" : 1,
#          "start_offset" : 0,
#          "type" : "word",
#          "token" : "my"
#       },
#       {
#          "end_offset" : 7,
#          "position" : 2,
#          "start_offset" : 3,
#          "type" : "word",
#          "token" : "file"
#       },
#       {
#          "end_offset" : 22,
#          "position" : 3,
#          "start_offset" : 19,
#          "type" : "word",
#          "token" : "doc"
#       }
#    ]
# }

3) Ngrams on search

You include your ngram token filter in both the index analyzer and the search analyzer. That's fine for the index analyzer, because you want the ngrams to be indexed. But when you search, you want to search on the full string, not on each ngram.

For instance, if you index "abcd" with ngrams of length 1 to 4, you will end up with these tokens:

a b c d ab bc cd abc bcd abcd

But if you search on "dcba" (which shouldn't match) and you also analyze your search terms with ngrams, then you are actually searching on:

d c b a dc cb ba dcb cba dcba

So a, b, c and d will match!
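You can watch this happen with the analyze API. A sketch using the built-in ngram token filter (which defaults to min_gram 1 and max_gram 2, so only the single characters and bigrams appear):

curl -XGET 'http://127.0.0.1:9200/_analyze?pretty=1&text=abcd&tokenizer=keyword&filters=ngram'

# returns the tokens: a b c d ab bc cd
# (ordering may differ between versions)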

Solution

First, you need to choose the right analyzer. Your users will probably search for words, numbers or dates, but they probably won't expect ile to match file. Instead, it will probably be more useful to use edge ngrams, which will anchor the ngram to the start (or end) of each word.

Also, why exclude docx etc.? Surely a user may well want to search on the file type?

So let's break up each filename into smaller tokens by removing anything that isn't a letter or a number (using the pattern tokenizer):

My_first_file_2012.01.13.doc
=> my first file 2012 01 13 doc

Then for the index analyzer, we'll also use edge ngrams on each of those tokens:

my     => m my
first  => f fi fir firs first
file   => f fi fil file
2012   => 2 20 201 2012
01     => 0 01
13     => 1 13
doc    => d do doc

We create the index as follows:

curl -XPUT 'http://127.0.0.1:9200/files/?pretty=1'  -d '
{
   "settings" : {
      "analysis" : {
         "analyzer" : {
            "filename_search" : {
               "tokenizer" : "filename",
               "filter" : ["lowercase"]
            },
            "filename_index" : {
               "tokenizer" : "filename",
               "filter" : ["lowercase","edge_ngram"]
            }
         },
         "tokenizer" : {
            "filename" : {
               "pattern" : "[^\\p{L}\\d]+",
               "type" : "pattern"
            }
         },
         "filter" : {
            "edge_ngram" : {
               "side" : "front",
               "max_gram" : 20,
               "min_gram" : 1,
               "type" : "edgeNGram"
            }
         }
      }
   },
   "mappings" : {
      "file" : {
         "properties" : {
            "filename" : {
               "type" : "string",
               "search_analyzer" : "filename_search",
               "index_analyzer" : "filename_index"
            }
         }
      }
   }
}
'

Now, test that our analyzers are working correctly:

filename_search:

curl -XGET 'http://127.0.0.1:9200/files/_analyze?pretty=1&text=My_first_file_2012.01.13.doc&analyzer=filename_search' 
[results snipped]
"token" : "my"
"token" : "first"
"token" : "file"
"token" : "2012"
"token" : "01"
"token" : "13"
"token" : "doc"

filename_index:

curl -XGET 'http://127.0.0.1:9200/files/_analyze?pretty=1&text=My_first_file_2012.01.13.doc&analyzer=filename_index' 
"token" : "m"
"token" : "my"
"token" : "f"
"token" : "fi"
"token" : "fir"
"token" : "firs"
"token" : "first"
"token" : "f"
"token" : "fi"
"token" : "fil"
"token" : "file"
"token" : "2"
"token" : "20"
"token" : "201"
"token" : "2012"
"token" : "0"
"token" : "01"
"token" : "1"
"token" : "13"
"token" : "d"
"token" : "do"
"token" : "doc"

OK - seems to be working correctly. So let's add some docs:

curl -X POST "http://localhost:9200/files/file" -d '{ "filename" : "My_first_file_created_at_2012.01.13.doc" }'
curl -X POST "http://localhost:9200/files/file" -d '{ "filename" : "My_second_file_created_at_2012.01.13.pdf" }'
curl -X POST "http://localhost:9200/files/file" -d '{ "filename" : "Another file.txt" }'
curl -X POST "http://localhost:9200/files/file" -d '{ "filename" : "And_again_another_file.docx" }'
curl -X POST "http://localhost:9200/files/file" -d '{ "filename" : "foo.bar.txt" }'
curl -X POST "http://localhost:9200/files/_refresh"

And try a search:

curl -XGET 'http://127.0.0.1:9200/files/file/_search?pretty=1'  -d '
{
   "query" : {
      "text" : {
         "filename" : "2012.01"
      }
   }
}
'

# {
#    "hits" : {
#       "hits" : [
#          {
#             "_source" : {
#                "filename" : "My_second_file_created_at_2012.01.13.pdf"
#             },
#             "_score" : 0.06780553,
#             "_index" : "files",
#             "_id" : "PsDvfFCkT4yvJnlguxJrrQ",
#             "_type" : "file"
#          },
#          {
#             "_source" : {
#                "filename" : "My_first_file_created_at_2012.01.13.doc"
#             },
#             "_score" : 0.06780553,
#             "_index" : "files",
#             "_id" : "ER5RmyhATg-Eu92XNGRu-w",
#             "_type" : "file"
#          }
#       ],
#       "max_score" : 0.06780553,
#       "total" : 2
#    },
#    "timed_out" : false,
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 5,
#       "total" : 5
#    },
#    "took" : 4
# }

Success!

#### UPDATE ####

I realised that a search for 2012.01 would match both 2012.01.13 and 2012.12.01, so I tried changing the query to use a text phrase query instead. However, this didn't work. It turns out that the edge ngram filter increments the position count for each ngram (while I would have thought that the position of each ngram would be the same as for the start of the word).

The issue mentioned in point (3) above is only a problem when using a query_string, field, or text query, which tries to match ANY token. However, a text_phrase query tries to match ALL of the tokens, and in the correct order.
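You can see the position increments for yourself with the analyze API; look at the "position" value of each token in the output:

curl -XGET 'http://127.0.0.1:9200/files/_analyze?pretty=1&text=2012.01&analyzer=filename_index'

# The tokens 2, 20, 201, 2012, 0, 01 come back with steadily
# increasing positions, rather than the ngrams of one word sharing
# a single position, which is why the naive phrase query fails.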

To demonstrate the issue, index another doc with a different date:

curl -X POST "http://localhost:9200/files/file" -d '{ "filename" : "My_third_file_created_at_2012.12.01.doc" }'
curl -X POST "http://localhost:9200/files/_refresh"

And run the same search as above:

curl -XGET 'http://127.0.0.1:9200/files/file/_search?pretty=1'  -d '
{
   "query" : {
      "text" : {
         "filename" : {
            "query" : "2012.01"
         }
      }
   }
}
'

# {
#    "hits" : {
#       "hits" : [
#          {
#             "_source" : {
#                "filename" : "My_third_file_created_at_2012.12.01.doc"
#             },
#             "_score" : 0.22097087,
#             "_index" : "files",
#             "_id" : "xmC51lIhTnWplOHADWJzaQ",
#             "_type" : "file"
#          },
#          {
#             "_source" : {
#                "filename" : "My_first_file_created_at_2012.01.13.doc"
#             },
#             "_score" : 0.13137488,
#             "_index" : "files",
#             "_id" : "ZUezxDgQTsuAaCTVL9IJgg",
#             "_type" : "file"
#          },
#          {
#             "_source" : {
#                "filename" : "My_second_file_created_at_2012.01.13.pdf"
#             },
#             "_score" : 0.13137488,
#             "_index" : "files",
#             "_id" : "XwLNnSlwSeyYtA2y64WuVw",
#             "_type" : "file"
#          }
#       ],
#       "max_score" : 0.22097087,
#       "total" : 3
#    },
#    "timed_out" : false,
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 5,
#       "total" : 5
#    },
#    "took" : 5
# }

The first result has the date 2012.12.01, which isn't the best match for 2012.01. So, to match only that exact phrase, we can do:

curl -XGET 'http://127.0.0.1:9200/files/file/_search?pretty=1'  -d '
{
   "query" : {
      "text_phrase" : {
         "filename" : {
            "query" : "2012.01",
            "analyzer" : "filename_index"
         }
      }
   }
}
'

# {
#    "hits" : {
#       "hits" : [
#          {
#             "_source" : {
#                "filename" : "My_first_file_created_at_2012.01.13.doc"
#             },
#             "_score" : 0.55737644,
#             "_index" : "files",
#             "_id" : "ZUezxDgQTsuAaCTVL9IJgg",
#             "_type" : "file"
#          },
#          {
#             "_source" : {
#                "filename" : "My_second_file_created_at_2012.01.13.pdf"
#             },
#             "_score" : 0.55737644,
#             "_index" : "files",
#             "_id" : "XwLNnSlwSeyYtA2y64WuVw",
#             "_type" : "file"
#          }
#       ],
#       "max_score" : 0.55737644,
#       "total" : 2
#    },
#    "timed_out" : false,
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 5,
#       "total" : 5
#    },
#    "took" : 7
# }

Or, if you still want to match all 3 files (because the user might remember some of the words in the filename, but in the wrong order), you can run both queries but increase the importance of the filenames with the words in the correct order:

curl -XGET 'http://127.0.0.1:9200/files/file/_search?pretty=1'  -d '
{
   "query" : {
      "bool" : {
         "should" : [
            {
               "text_phrase" : {
                  "filename" : {
                     "boost" : 2,
                     "query" : "2012.01",
                     "analyzer" : "filename_index"
                  }
               }
            },
            {
               "text" : {
                  "filename" : "2012.01"
               }
            }
         ]
      }
   }
}
'

# {
#    "hits" : {
#       "hits" : [
#          {
#             "_source" : {
#                "filename" : "My_first_file_created_at_2012.01.13.doc"
#             },
#             "_score" : 0.56892186,
#             "_index" : "files",
#             "_id" : "ZUezxDgQTsuAaCTVL9IJgg",
#             "_type" : "file"
#          },
#          {
#             "_source" : {
#                "filename" : "My_second_file_created_at_2012.01.13.pdf"
#             },
#             "_score" : 0.56892186,
#             "_index" : "files",
#             "_id" : "XwLNnSlwSeyYtA2y64WuVw",
#             "_type" : "file"
#          },
#          {
#             "_source" : {
#                "filename" : "My_third_file_created_at_2012.12.01.doc"
#             },
#             "_score" : 0.012931341,
#             "_index" : "files",
#             "_id" : "xmC51lIhTnWplOHADWJzaQ",
#             "_type" : "file"
#          }
#       ],
#       "max_score" : 0.56892186,
#       "total" : 3
#    },
#    "timed_out" : false,
#    "_shards" : {
#       "failed" : 0,
#       "successful" : 5,
#       "total" : 5
#    },
#    "took" : 4
# }
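(Side note: in later Elasticsearch releases the text and text_phrase queries were renamed to match and match_phrase, as pointed out in the comments below. On such a version, the final query above would look roughly like this; a sketch, as the rest of the syntax in this answer is from the 0.x era:)

curl -XGET 'http://127.0.0.1:9200/files/file/_search?pretty=1'  -d '
{
   "query" : {
      "bool" : {
         "should" : [
            {
               "match_phrase" : {
                  "filename" : {
                     "boost" : 2,
                     "query" : "2012.01",
                     "analyzer" : "filename_index"
                  }
               }
            },
            {
               "match" : {
                  "filename" : "2012.01"
               }
            }
         ]
      }
   }
}
'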
– DrTech
  • Wow, this is not just a solution. It's a tutorial I was looking for :D THX – Biggie Feb 26 '12 at 09:27
  • Thanks a lot for this great answer! I've just noticed that "text*" queries are deprecated in the most recent versions of elasticsearch and should be renamed to "match" and "match_phrase". – Jörn Aug 21 '14 at 23:48
  • Thanks a lot for this. It's been very useful so far (too bad the links are broken). I'm still a bit confused about a few bits (e.g. I know the pattern is a RE but it's not clear what the `p{L}` is). I'm using it with a `match` query; the problem I'm seeing is that when I search only in the filename field it seems to work, but it doesn't when using `_all` :(. Any idea? – Aldo 'xoen' Giambelluca Apr 15 '15 at 14:16
  • This works fine if the file names have word breakers. The problem is that not all files have those; lots of file names actually combine camelized words. For example, if you try to index OpenJDK.7z, a user will normally either search for the full file name `openjdk`, which will work with this analyzer, or will probably search for `jdk`, which this analyzer will fail to return. – Zaid Amir Apr 15 '15 at 16:00
  • Regarding the text_phrase query with "analyzer" : "filename_index": here you are using "filename_index", which has the edge ngram filter, while searching. So won't that mean the text will also be broken up while searching? – Anirudh Modi Nov 26 '15 at 10:26
  • @DrTech: Thanks for the great answer... I'm having some problems while searching; I'm getting an error in my Sense plugin: `"type": "query_parsing_exception", "reason": "No query registered for [text]"`. Did anyone else face the same error? – ASN Jun 27 '16 at 08:36
  • @ASN: change "text" to "match" and it should work fine. – corvus Sep 07 '16 at 13:05

I believe this is because of the tokenizer being used.

http://www.elasticsearch.org/guide/reference/index-modules/analysis/lowercase-tokenizer.html

The lowercase tokenizer splits on word boundaries, so 2012.01.13 will be indexed as "2012", "01" and "13". Searching for the string "2012.01.13" will obviously not match.

One option would be to apply the same tokenisation at search time as well; a search for "2012.01.13" will then be tokenised down to the same tokens as in the index, and it will match (a sketch follows below). This is also handy, as you then don't need to always lowercase your searches in code.
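A sketch of that first option in the mapping, reusing the analyzer name from the question; spelling out both roles makes the intent explicit (note that specifying a single "analyzer" in the mapping already applies it at both index and search time):

"mappings": {
  "files": {
    "properties": {
      "filename": {
        "type": "string",
        "index_analyzer": "filename_analyzer",
        "search_analyzer": "filename_analyzer"
      }
    }
  }
}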

The second option would be to use an n-gram tokenizer instead of the filter (sketched below). This means that it will ignore word boundaries (and you will get the underscores as well), but you may have issues with case mismatches, which is presumably the reason you added the lowercase tokenizer in the first place.
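For the second option, a minimal sketch of index settings with an nGram tokenizer in place of the nGram filter (the analyzer and tokenizer names here are made up for illustration):

curl -XPUT 'http://localhost:9200/files' -d '
{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "filename_ngram" : {
          "type" : "custom",
          "tokenizer" : "ngram_tokenizer",
          "filter" : ["lowercase"]
        }
      },
      "tokenizer" : {
        "ngram_tokenizer" : {
          "type" : "nGram",
          "min_gram" : 3,
          "max_gram" : 8
        }
      }
    }
  }
}
'

Because the lowercase filter runs after tokenization here, case mismatches are handled as well.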

– rmtheis
  • To the 1st option: I thought that my filename_analyzer would already be used when indexing and searching, because I did not explicitly use index_analyzer/search_analyzer. To the 2nd option: I tried it that way, but the search only returns results if I surround the keywords with `"*"`, for example `"*2012*"`. Moreover, `"*doc*"` finds both doc files, but `"*.doc*"` finds only the docx file. Any ideas? – Biggie Feb 24 '12 at 11:59

I have no experience with ES, but in Solr you would need to specify the field type as text. Your field is of type string instead of text. String fields are not analyzed, but stored and indexed verbatim. Give that a shot and see if it works.

properties": {
        "filename": {
          "type": "string",
          "analyzer": "filename_analyzer"
        }
– Mikos
  • ES just uses type `string`, and these are analyzed by default. If you want them to be stored verbatim, you have to add `{"index":"not_analyzed"}` to the mapping – DrTech Feb 25 '12 at 12:31
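For reference, a sketch of such a verbatim (not analyzed) mapping, in the pre-2.x syntax used throughout this thread:

"properties": {
  "filename": {
    "type": "string",
    "index": "not_analyzed"
  }
}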