
Scroll to the end to skip the explanation.

Background

In my Android app, I want to use non-English Unicode text strings to search for matches in text documents/fields that are stored in a SQLite database. I had learned (or so I thought) that what I need to do is implement Full Text Search with fts3/fts4, so that is what I have been working on learning for the past couple of days. FTS is supported by Android, as shown in the documentation Storing and Searching for Data and in the blog post Android Quick Tip: Using SQLite FTS Tables.

Problem

Everything was looking good, but then I read the March 2012 blog post The sorry state of SQLite full text search on Android, which said

The first step when building a full text search index is to break down the textual content into words, aka tokens. Those tokens are then entered into a special index which lets SQLite perform very fast searches based on a token (or a set of tokens).

SQLite has two built-in tokenizers, and they both only consider tokens consisting of US ASCII characters. All other, non-US ASCII characters are considered whitespace.

After that I also found this StackOverflow answer by @CL. (who, based on tags and reputation, appears to be an expert on SQLite) replying to a question about matching Vietnamese letters with different diacritics:

You must create the FTS table with a tokenizer that can handle Unicode characters, i.e., ICU or UNICODE61.

Please note that these tokenizers might not be available on all Android versions, and that the Android API does not expose any functions for adding user-defined tokenizers.

This 2011 SO answer seems to confirm that Android does not support tokenizers beyond the two basic simple and porter ones.

This is 2015. Are there any updates to this situation? I need full text search to work for everyone using my app, not just people with new phones (even if the newest Android version does support it now).

Potential partial solution?

I find it hard to believe that FTS does not work at all with Unicode. The documentation for the simple tokenizer says

A term is a contiguous sequence of eligible characters, where eligible characters are all alphanumeric characters and all characters with Unicode codepoint values greater than or equal to 128. All other characters are discarded when splitting a document into terms. Their only contribution is to separate adjacent terms. (emphasis added)

That gives me hope that some basic Unicode functionality could still be supported in Android, even if things like capitalization and diacritics (and various other equivalent letter forms that have different Unicode code points) are not supported.

My Main Question

Can I use SQLite FTS in Android with non-English Unicode text (codepoints > 128) if I am only using literal Unicode string tokens separated by spaces? (That is, I am searching for exact strings that occur in the text.)
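To make that concrete, this is roughly the kind of lookup I have in mind (a minimal sketch only; the table and column names are placeholders):

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

// Placeholder names: "books_fts" is an FTS3/FTS4 virtual table with a "body" column.
public class FtsSearchExample {
    public static Cursor searchForExactPhrase(SQLiteDatabase db, String unicodeTerm) {
        // Quote the term so FTS treats it as a literal phrase rather than query syntax.
        String phrase = "\"" + unicodeTerm.replace("\"", "") + "\"";
        return db.rawQuery(
                "SELECT docid, body FROM books_fts WHERE body MATCH ?",
                new String[]{ phrase });
    }
}
```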


Suragch
  • Hi, I found that the unicode61 tokenizer does not support CJK at all; see my question here: https://stackoverflow.com/questions/52422437/why-sqlite-fts5-unicode61-tokenizer-does-not-support-cjkchinese-japanese-korean I ended up building an ICU-enabled SQLite version myself. I was wondering, how did you do FTS for CJK? – Qiulang Oct 08 '18 at 06:40
  • @Qiulang, when I did it I was only handling traditional Mongolian text, not CJK. (See my supplemental answer below for how I went about it. But it won't help for Chinese.) If I were to do this for CJK I would look at `BreakIterator.getWordInstance()` to break sentences into words and then build the FTS table from that. See my answer [here](https://stackoverflow.com/questions/42219292/how-does-breakiterator-work-in-android) about `BreakIterator`. – Suragch Oct 08 '18 at 08:31
  • Thanks for the answer. So your solution relies on ICU, right? – Qiulang Oct 08 '18 at 08:34
  • @Qiulang, the ICU version of `BreakIterator` is only available from Android API version 24, so no, my answer doesn't rely on ICU. I imported `java.text.BreakIterator` instead. There are more details in [my answer](https://stackoverflow.com/a/42219474/3681880). – Suragch Oct 08 '18 at 08:40
  • I am not doing Android development, so I didn't read your answer carefully, sorry; I will examine it. But I suspect it may have the same problem as I described here: https://stackoverflow.com/a/31396975/301513 – Qiulang Oct 08 '18 at 08:45
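For reference, a rough sketch of the `BreakIterator` approach mentioned in the comments above (this uses `java.text.BreakIterator`, so it does not need the ICU variant; the class and method names here are just for illustration):

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Split a sentence into word tokens before inserting the text into the FTS table.
public class WordSplitter {
    public static List<String> splitIntoWords(String text, Locale locale) {
        List<String> words = new ArrayList<>();
        BreakIterator boundary = BreakIterator.getWordInstance(locale);
        boundary.setText(text);
        int start = boundary.first();
        for (int end = boundary.next(); end != BreakIterator.DONE;
                start = end, end = boundary.next()) {
            String piece = text.substring(start, end);
            if (containsLetterOrDigit(piece)) {   // skip whitespace and punctuation pieces
                words.add(piece);
            }
        }
        return words;
    }

    private static boolean containsLetterOrDigit(String s) {
        for (int i = 0; i < s.length(); i++) {
            if (Character.isLetterOrDigit(s.charAt(i))) {
                return true;
            }
        }
        return false;
    }
}
```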

2 Answers


Supplemental Answer

I ended up doing what @CL recommended and was able to successfully implement Full Text Search with Unicode. These are the basic steps I followed:

  1. Replace all Unicode characters (>= 128) that are not part of words with a space character.
  2. (optional) Replace specific characters with more general ones. For example, ē, è, and é could all be replaced with e (if this sort of generalized search is desired). This is not necessary, but if you don't do this, then searching for é will only return documents with é, and searching for e will only return documents with e (and not é). (A minimal sketch of these two steps follows this list.)
  3. Populate the virtual FTS table using the modified text created in steps 1 and 2.
  4. Populate your normal table with the unmodified text. The schema and the number of documents must match the FTS table, of course.
  5. Link the virtual FTS table with your normal text table/column using an external content table so that you are not storing a copy of the modified text, only the document ids that were created from that text.
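Here is a minimal sketch of steps 1 and 2. How to decide what counts as "part of a word" depends on your script; `Character.isLetterOrDigit` is just one approximation, and the folded characters below are only examples.

```java
// Sketch of steps 1 and 2. Character.isLetterOrDigit is only an approximation of
// "part of a word", and the folding in foldCharacters is just an example; adjust
// both for the script your documents actually use.
public class FtsNormalizer {

    // Step 1: replace Unicode characters (codepoint >= 128) that are not part of
    // words with a space, so the simple tokenizer treats them as term separators.
    public static String stripNonWordUnicode(String text) {
        StringBuilder sb = new StringBuilder(text.length());
        int i = 0;
        while (i < text.length()) {
            int codePoint = text.codePointAt(i);
            if (codePoint >= 128 && !Character.isLetterOrDigit(codePoint)) {
                sb.append(' ');
            } else {
                sb.appendCodePoint(codePoint);
            }
            i += Character.charCount(codePoint);
        }
        return sb.toString();
    }

    // Step 2 (optional): fold specific characters into more general ones,
    // e.g. ē, è, é -> e, so that searching for "e" also matches "é".
    public static String foldCharacters(String text) {
        return text.replace('ē', 'e')
                   .replace('è', 'e')
                   .replace('é', 'e');
    }
}
```

If you use step 2, remember to apply the same folding to the user's search terms as well, so the query lines up with the index.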

Please read Full text search example in Android for instructions on how to create the FTS table and link it to the normal table. This took a long time to figure out, but in the end it gave very fast full text searches, even for a very large number of documents.
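To give a rough idea of what that linkage looks like (a sketch only; "docs" and "docs_fts" are placeholder names, and the linked answer has the full details):

```java
import android.content.ContentValues;
import android.database.sqlite.SQLiteDatabase;

// Placeholder schema: "docs" holds the original text, and "docs_fts" is an
// external-content FTS4 table whose index is built from the normalized text.
public class FtsSchemaExample {

    public static void createTables(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE docs (_id INTEGER PRIMARY KEY, body TEXT)");
        // content="docs" means docs_fts stores only the index, not a copy of the text.
        db.execSQL("CREATE VIRTUAL TABLE docs_fts USING fts4(content=\"docs\", body)");
    }

    public static long insertDocument(SQLiteDatabase db, String originalText,
                                      String normalizedText) {
        // Step 4: unmodified text goes into the normal table.
        ContentValues docValues = new ContentValues();
        docValues.put("body", originalText);
        long rowId = db.insert("docs", null, docValues);

        // Steps 3 and 5: the modified text is indexed under the same docid,
        // so MATCH results map back to the original row in "docs".
        ContentValues ftsValues = new ContentValues();
        ftsValues.put("docid", rowId);
        ftsValues.put("body", normalizedText);
        db.insert("docs_fts", null, ftsValues);
        return rowId;
    }
}
```

When you query docs_fts with MATCH, the column values you get back come from docs, so the search results show the original, unmodified text.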

If you need more details please leave a comment below.

Suragch

Unicode characters are handled like 'normal' letters, so you can use them in FTS data and search terms. (Prefix searches should work, too.)

The problem is that Unicode characters are not normalized, i.e., all characters are treated as letters (even if they actually are punctuation (―†), or other non-letter characters (☺♫)), and that upper/lowercase are not merged, and that diacritics are not removed.
If you want to handle those cases correctly, you have to do these normalizations manually before you insert the documents into the database, and before you use the search terms.
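For example, case folding and diacritic removal can be done in Java with `java.text.Normalizer` (one possible approach, not the only one):

```java
import java.text.Normalizer;
import java.util.Locale;

// Lowercase, decompose accented characters, then strip the combining marks.
// Apply the same function to the documents and to the search terms.
public class SearchNormalizer {
    public static String normalize(String input) {
        String lowercased = input.toLowerCase(Locale.ROOT);
        String decomposed = Normalizer.normalize(lowercased, Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{M}+", "");  // \p{M} = combining marks
    }
}
```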

CL.
  • By "do these normalizations manually" as far as punctuation goes, do you mean substitute Unicode punctuation with ASCII punctuation that the simple tokenizer will recognize? – Suragch Apr 25 '15 at 07:17
  • The tokenizer *ignores* punctuation; such characters just mark boundaries between words. – CL. Apr 25 '15 at 09:22
  • I mean the Unicode punctuation beyond the 128 range, where the simple tokenizer does not know that they are actually punctuation marks and treats them like normal letters within the token. What did you mean by manual normalization for those characters? – Suragch Apr 25 '15 at 10:16
  • If you want to treat Unicode punctuation like ASCII punctuation, you can convert them to some ASCII punctuation character, or just remove them. – CL. Apr 25 '15 at 11:19