55

Google, Firefox, and some AJAX-powered pages show a list of probable completions while the user types characters.

Can someone suggest a good algorithm and data structure for implementing autocomplete?

Saheb
subbul

5 Answers

63

A trie is a data structure that can be used to quickly find words that match a prefix.

Edit: Here's an example showing how to use one to implement autocomplete: http://rmandvikar.blogspot.com/2008/10/trie-examples.html

Here's a comparison of 3 different auto-complete implementations (though it's in Java not C++).

* In-Memory Trie
* In-Memory Relational Database
* Java Set

When looking up keys, the trie is marginally faster than the Set implementation. Both the trie and the set are a good bit faster than the relational database solution.

The setup cost of the Set is lower than the Trie or DB solution. You'd have to decide whether you'd be constructing new "wordsets" frequently or whether lookup speed is the higher priority.

These results are in Java, your mileage may vary with a C++ solution.
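As a rough illustration of the in-memory trie approach (this is a C++ sketch of my own, not the Java code from the comparison; the class and method names are made up for this example):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Minimal trie: each node maps a character to a child; a flag marks word ends.
struct TrieNode {
    std::map<char, std::unique_ptr<TrieNode>> children;
    bool is_word = false;
};

class Trie {
public:
    void insert(const std::string& word) {
        TrieNode* node = &root_;
        for (char c : word) {
            auto& child = node->children[c];
            if (!child) child = std::make_unique<TrieNode>();
            node = child.get();
        }
        node->is_word = true;
    }

    // Return every stored word that begins with `prefix`.
    std::vector<std::string> complete(const std::string& prefix) const {
        const TrieNode* node = &root_;
        for (char c : prefix) {
            auto it = node->children.find(c);
            if (it == node->children.end()) return {};  // prefix not present
            node = it->second.get();
        }
        std::vector<std::string> out;
        collect(node, prefix, out);
        return out;
    }

private:
    static void collect(const TrieNode* node, const std::string& word,
                        std::vector<std::string>& out) {
        if (node->is_word) out.push_back(word);
        for (const auto& [c, child] : node->children)
            collect(child.get(), word + c, out);
    }

    TrieNode root_;
};
```

Because the children are kept in a `std::map`, completions come back in alphabetical order; a real implementation would also cap or rank the results.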

Dan Bechard
Glen
Somewhat related is Peter Norvig's (of Google) description of how to do spelling correction: http://norvig.com/spell-correct.html – Nate Kohl Nov 23 '09 at 15:23
A standard Trie is very memory intensive; for larger sets you want to use a Compacted Trie, which greatly reduces the memory footprint. Additional optimisations include lazy initialisation of node values and the right data structures for the children/value sets. A while ago I created an [autocomplete library](https://github.com/fmmfonseca/completely) capable of handling very large data sets (10,000,000+) that efficiently answers exact and approximate searches. – Filipe Miguel Fonseca Jun 23 '15 at 14:47
22

For large datasets, a good candidate for the backend would be Ternary search trees. They combine the best of two worlds: the low space overhead of binary search trees and the character-based time efficiency of digital search tries.

See in Dr. Dobbs Journal: http://www.ddj.com/windows/184410528

The goal is the fast retrieval of a finite result set as the user types. Let's first consider that to search for "computer science" you can start typing "computer" or "science", but not "omputer". So, given a phrase, generate the sub-phrases starting at each word. Now feed each of those phrases into the TST (ternary search tree). Each node in the TST represents a prefix of a phrase typed so far. We store the best 10 (say) results for that prefix in that node. If a node has many more candidates than the result limit (10 here), a ranking function resolves the competition between results.
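The sub-phrase step above might be sketched like this (`word_start_suffixes` is a hypothetical helper name for this example):

```cpp
#include <cassert>
#include <string>
#include <vector>

// For a phrase, produce every suffix that starts at a word boundary,
// so "computer science" can be matched from "computer" or "science",
// but not from "omputer".
std::vector<std::string> word_start_suffixes(const std::string& phrase) {
    std::vector<std::string> out;
    bool at_word_start = true;
    for (size_t i = 0; i < phrase.size(); ++i) {
        if (at_word_start && phrase[i] != ' ') out.push_back(phrase.substr(i));
        at_word_start = (phrase[i] == ' ');
    }
    return out;
}
```

Each returned sub-phrase would then be inserted into the TST so that typing any of its words can surface the full phrase.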

The tree can be rebuilt once every few hours, depending on how dynamic the data is. If the data changes in real time, some other algorithm will probably give a better balance. In this case, the absolute requirement is lightning-fast retrieval of results for every keystroke, which the TST does very well.

More complications will arise if the suggestion of spelling corrections is involved. In that case, the edit distance algorithms will have to be considered as well.

For small datasets like a list of countries, a simple trie implementation will do. If you are going to implement such an autocomplete drop-down in a web application, the YUI3 autocomplete widget will do everything for you once you provide the data as a list. If you use YUI3 only as the frontend for an autocomplete backed by large data, build the TST-based web service in C++ and have the widget's script-node data source fetch data from the web service instead of a simple list.
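A minimal TST supporting prefix completion might look like the sketch below (illustrative C++ of my own; it omits the per-node top-10 result caching and ranking that the answer describes):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// A ternary search tree node: lo/hi behave like a BST on the current
// character, while `eq` advances to the next character of the key.
struct TstNode {
    char ch;
    bool is_end = false;
    std::unique_ptr<TstNode> lo, eq, hi;
    explicit TstNode(char c) : ch(c) {}
};

class Tst {
public:
    void insert(const std::string& key) {
        if (!key.empty()) root_ = insert(std::move(root_), key, 0);
    }

    // Collect every stored key that begins with `prefix`.
    std::vector<std::string> complete(const std::string& prefix) const {
        std::vector<std::string> out;
        const TstNode* node = root_.get();
        size_t i = 0;
        while (node && i < prefix.size()) {
            if (prefix[i] < node->ch)      node = node->lo.get();
            else if (prefix[i] > node->ch) node = node->hi.get();
            else {
                ++i;
                if (i == prefix.size()) break;  // matched the whole prefix
                node = node->eq.get();
            }
        }
        if (!node || i < prefix.size()) return out;
        if (node->is_end) out.push_back(prefix);
        collect(node->eq.get(), prefix, out);
        return out;
    }

private:
    static std::unique_ptr<TstNode> insert(std::unique_ptr<TstNode> node,
                                           const std::string& key, size_t i) {
        if (!node) node = std::make_unique<TstNode>(key[i]);
        if (key[i] < node->ch)       node->lo = insert(std::move(node->lo), key, i);
        else if (key[i] > node->ch)  node->hi = insert(std::move(node->hi), key, i);
        else if (i + 1 < key.size()) node->eq = insert(std::move(node->eq), key, i + 1);
        else                         node->is_end = true;
        return node;
    }

    static void collect(const TstNode* node, const std::string& prefix,
                        std::vector<std::string>& out) {
        if (!node) return;
        collect(node->lo.get(), prefix, out);              // in-order traversal
        if (node->is_end) out.push_back(prefix + node->ch);
        collect(node->eq.get(), prefix + node->ch, out);
        collect(node->hi.get(), prefix, out);
    }

    std::unique_ptr<TstNode> root_;
};
```

The in-order traversal returns completions in alphabetical order; a production version would instead store the top-N ranked completions at each node, as described above.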

Saheb
Joy Dutta
7

Segment trees can be used to implement autocomplete efficiently.

r15habh
6

If you want to suggest the most popular completions, a "Suggest Tree" may be a good choice: Suggest Tree

Nicolai
3

For a simple solution: generate 'candidate' strings within a minimum edit (Levenshtein) distance (1 or 2) of the typed word, then test each candidate's existence with a hash container (a set will suffice for a simple solution; otherwise use unordered_set from TR1 or Boost).

Example: You wrote carr and you want car. arr is generated by 1 deletion. Is arr in your unordered_set ? No. crr is generated by 1 deletion. Is crr in your unordered_set ? No. car is generated by 1 deletion. Is car in your unordered_set ? Yes, you win.

Of course there are also insertions, substitutions, transpositions, and so on.

You can see that the candidate-generation algorithm is really where the time goes, especially if you have a very small unordered_set.
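A sketch of that candidate-generation step in C++ (following the deletion example above, extended to the other edit types; function names are mine, and duplicate candidates are not deduplicated):

```cpp
#include <cassert>
#include <string>
#include <unordered_set>
#include <vector>

// All strings within one edit (deletion, transposition, substitution,
// insertion over a-z) of `word` -- the candidate-generation step.
std::vector<std::string> edits1(const std::string& word) {
    std::vector<std::string> out;
    for (size_t i = 0; i < word.size(); ++i)                          // deletions
        out.push_back(word.substr(0, i) + word.substr(i + 1));
    for (size_t i = 0; i + 1 < word.size(); ++i) {                    // transpositions
        std::string t = word;
        std::swap(t[i], t[i + 1]);
        out.push_back(t);
    }
    for (size_t i = 0; i < word.size(); ++i)                          // substitutions
        for (char c = 'a'; c <= 'z'; ++c) {
            std::string t = word;
            t[i] = c;
            out.push_back(t);
        }
    for (size_t i = 0; i <= word.size(); ++i)                         // insertions
        for (char c = 'a'; c <= 'z'; ++c)
            out.push_back(word.substr(0, i) + c + word.substr(i));
    return out;
}

// Keep only the candidates that actually exist in the dictionary.
std::vector<std::string> corrections(const std::string& word,
                                     const std::unordered_set<std::string>& dict) {
    std::vector<std::string> found;
    for (const auto& cand : edits1(word))
        if (dict.count(cand)) found.push_back(cand);
    return found;
}
```

So for "carr" with "car" in the dictionary, the deletion candidates include "arr", "crr", and "car", and only "car" survives the membership test.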

anno