As per this comment in a related thread, I'd like to know why Levenshtein distance based methods are better than Soundex.
- I second the Metaphone / Double Metaphone suggestion – Feb 22 '10 at 20:08
4 Answers
Soundex is rather primitive - it was originally designed to be calculated by hand, and it reduces a word to a short key that can be compared. It works well with Western surnames, as it was developed for US census data; it's intended for phonetic comparison.
Levenshtein distance compares two strings and produces a number based on their similarity: the minimum count of insertions, deletions, and substitutions needed to turn one into the other.
Basically Soundex is better for finding that "Schmidt" and "Smith" might be the same surname.
Levenshtein distance is better for spotting that the user has mistyped "Levnshtein" ;-)
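For concreteness, here is a minimal sketch of the classic dynamic-programming formulation of Levenshtein distance (the function name is my own, not from any particular library):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic DP edit distance: insertions, deletions, and
    # substitutions each cost 1. Only two rows are kept in memory.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# The mistyped "Levnshtein" is one deletion away from "Levenshtein":
print(levenshtein("Levenshtein", "Levnshtein"))  # prints 1
```

Note that "Schmidt" and "Smith" come out at distance 4 under this measure - far apart as strings, even though they may sound alike.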

I would suggest using Metaphone, not Soundex. As noted, Soundex was developed in the early 20th century to index American names from 19th-century census data. Metaphone will give you useful results when checking the work of poor spellers who are spelling phonetically, "sounding it out".
Edit distance is good at catching typos such as repeated letters, transposed letters, or hitting the wrong key.
Consider the application to decide which will fit your users best—or use both together, with Metaphone complementing the suggestions produced by Levenshtein.
With regard to the original question, I've used n-grams successfully in information retrieval applications.
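As for n-grams, one simple approach is character-bigram similarity via the Dice coefficient; a minimal sketch (the function names and parameter choices here are mine, just for illustration):

```python
def ngrams(s: str, n: int = 2) -> set:
    # Character n-grams, e.g. bigrams of "smith" -> {"sm","mi","it","th"}
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def dice(a: str, b: str, n: int = 2) -> float:
    # Dice coefficient over n-gram sets: 2*|A ∩ B| / (|A| + |B|).
    # 1.0 means identical n-gram sets, 0.0 means none shared.
    ga, gb = ngrams(a, n), ngrams(b, n)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

print(dice("smith", "smyth"))  # prints 0.5 (shares "sm" and "th")
```

Because n-grams look at local substrings rather than whole-string alignment, they tolerate both typos and some spelling variation, which is why they turn up in information retrieval.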
- And I'd go for Double Metaphone: it returns two codes, one for the Western-sounding pronunciation and another for 'foreign' (more Slavic, IIRC) sounds. – gbjbaanb Jan 01 '09 at 15:51
- Soundex was developed in the early 20th century, and used for census data from the 19th century. – webmaster777 Dec 20 '17 at 08:51
I agree with you on Daitch-Mokotoff; Soundex is biased because the original US census takers wanted 'Americanized' names.
Maybe an example of the difference would help:
Soundex puts additional weight on the start of a word - in fact it only considers the first letter and the next three consonant sounds. So while "Schmidt" and "Smith" will match, "Smith" and "Wmith" won't.
Levenshtein's algorithm would be better for finding typos - one or two missing or replaced letters produce a small distance (i.e. high similarity), while the phonetic impact of those letters matters less.
I don't think either is better, and I'd consider both a distance algorithm and a phonetic one for helping users correct typed input.
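To make the "Schmidt" / "Smith" / "Wmith" behaviour above concrete, here is a minimal sketch of American Soundex (my own illustrative implementation, not from any particular library): keep the first letter, map the remaining consonants to digits, treat H and W as transparent, let vowels reset adjacency, and pad to four characters.

```python
def soundex(name: str) -> str:
    # Consonant groups share a digit; vowels (and Y) map to nothing.
    codes = {**dict.fromkeys("BFPV", "1"),
             **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"),
             "L": "4",
             **dict.fromkeys("MN", "5"),
             "R": "6"}
    name = name.upper()
    result = name[0]                      # first letter is kept verbatim
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "HW":                    # H/W are transparent: keep prev code
            continue
        code = codes.get(ch, "")          # vowels map to "" and reset adjacency
        if code and code != prev:         # collapse adjacent duplicate codes
            result += code
        prev = code
    return (result + "000")[:4]           # pad/truncate to letter + 3 digits

print(soundex("Schmidt"), soundex("Smith"), soundex("Wmith"))
# prints: S530 S530 W530
```

"Schmidt" and "Smith" both code to S530, while "Wmith" gets W530 because the first letter differs - exactly the front-loaded behaviour described above.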

As I posted on the other question, Daitch-Mokotoff is better for us Europeans (and I'd argue the US).
I've also read the Wikipedia article on Levenshtein distance, but I don't see why (in real life) it's better for the user than Soundex.
