
I have a SQLite FTS4 database, and I would like to be able to perform OR queries on it from user input, e.g. if the user enters "hello there" I would like to do

SELECT * FROM fts_table WHERE text MATCH 'hello OR there'

However, if I simply pass in the user string I get an implicit AND query instead. I could of course tokenize the string myself and insert ORs, but then I'm using my own tokenizer, which could differ from the tokenizer used internally by SQLite's FTS.

Is there some way to either access the SQLite tokenizer from my C wrapper, or else to construct a SQL query that does this, along the lines of:

SELECT * FROM fts_table WHERE text MATCH interpolate('hello there', ' OR ')

Thanks, any pointers would be appreciated.

EDIT:

Any future readers of this question should note this other question: How to escape string for SQLite FTS query. It's not safe to just interpolate OR into the string, because you will often end up with invalid query strings: for instance, if the user enters hello AND there, interpolating OR produces hello OR AND OR there, which is not valid.
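For illustration, here is a minimal sketch of one escaping approach (quoting each token), using a hypothetical helper of my own — quote_and_join is not a SQLite API. Each whitespace-separated token is wrapped in double quotes and joined with OR, so a word like AND is treated as a plain term rather than an operator:

#include <ctype.h>
#include <stdio.h>

/* Hypothetical helper: split zIn on whitespace, wrap each token in double
** quotes and join with OR.  Given "hello AND there" this produces:
**   "hello" OR "AND" OR "there"
** so AND no longer parses as an FTS operator.  Embedded double quotes and
** detailed buffer-size handling are omitted for brevity. */
static void quote_and_join(const char *zIn, char *zOut, size_t nOut)
{
  const char *p = zIn;
  size_t nUsed = 0;
  zOut[0] = '\0';
  while( *p ){
    while( *p && isspace((unsigned char)*p) ) p++;     /* skip whitespace */
    if( *p=='\0' ) break;
    const char *zStart = p;
    while( *p && !isspace((unsigned char)*p) ) p++;    /* find end of token */
    nUsed += snprintf(zOut + nUsed, nOut - nUsed, "%s\"%.*s\"",
                      nUsed ? " OR " : "", (int)(p - zStart), zStart);
    if( nUsed >= nOut ) break;                         /* output buffer full */
  }
}

Note that this still splits on whitespace rather than with SQLite's own tokenizer (which is exactly the concern raised above), and any double quotes in the input would still need to be stripped or escaped. The answer below shows how to reuse SQLite's tokenizer instead.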

    As far as I can tell, there is no documented C API for the SQLite FTS extension. That's not so surprising, really, given that the C API doesn't provide low-level access to other internals, either. – John Bollinger Mar 04 '15 at 17:31
  • Yeah I sort of figured that. I guess one approach would be for me to write a custom tokenizer, which I pass in as the FTS tokenizer, and then I can also access that myself. – wxs Mar 04 '15 at 17:33
  • Yes, if you want to be absolutely certain that user input is tokenized by your program the same way it will be tokenized by SQLite, then a custom tokenizer seems a reasonable way to achieve that. – John Bollinger Mar 04 '15 at 17:35
  • @JohnBollinger see my answer below: it turns out SQLite does provide an API that lets you access their tokenizer. – wxs Mar 11 '15 at 20:00

1 Answer


So to answer part of my question: it turns out that you can access a pointer to the SQLite tokenizer function via SELECT fts3_tokenizer(<tokenizer-name>). Refer to http://www.sqlite.org/fts3.html#section_8_1 for more details.

This query returns a pointer to a sqlite3_tokenizer_module struct (encoded as a blob), which contains the function pointers xCreate, xOpen, xNext, and so on.

You'll need to add the header file fts3_tokenizer.h to your project, as described here: http://www.mail-archive.com/sqlite-users%40sqlite.org/msg62644.html
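As a rough sketch (based on the documentation linked above; error handling is trimmed, and newer SQLite releases may restrict fts3_tokenizer() unless it is explicitly enabled), the module pointer can be read back from the query result like this:

#include <string.h>
#include <sqlite3.h>
#include "fts3_tokenizer.h"

/* Fetch the sqlite3_tokenizer_module pointer for the built-in "simple"
** tokenizer.  fts3_tokenizer() returns the pointer packed into a BLOB. */
static const sqlite3_tokenizer_module *get_tokenizer_module(sqlite3 *db){
  const sqlite3_tokenizer_module *pModule = 0;
  sqlite3_stmt *pStmt = 0;

  if( sqlite3_prepare_v2(db, "SELECT fts3_tokenizer('simple')", -1,
                         &pStmt, 0)!=SQLITE_OK ){
    return 0;
  }
  if( sqlite3_step(pStmt)==SQLITE_ROW
   && sqlite3_column_bytes(pStmt, 0)==sizeof(pModule)
  ){
    memcpy(&pModule, sqlite3_column_blob(pStmt, 0), sizeof(pModule));
  }
  sqlite3_finalize(pStmt);
  return pModule;   /* NULL if anything went wrong */
}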

I'm putting the definition of the sqlite3_tokenizer_module below (copied verbatim because it includes useful comments):

/*
** Structures used by the tokenizer interface. When a new tokenizer
** implementation is registered, the caller provides a pointer to
** an sqlite3_tokenizer_module containing pointers to the callback
** functions that make up an implementation.
**
** When an fts3 table is created, it passes any arguments passed to
** the tokenizer clause of the CREATE VIRTUAL TABLE statement to the
** sqlite3_tokenizer_module.xCreate() function of the requested tokenizer
** implementation. The xCreate() function in turn returns an 
** sqlite3_tokenizer structure representing the specific tokenizer to
** be used for the fts3 table (customized by the tokenizer clause arguments).
**
** To tokenize an input buffer, the sqlite3_tokenizer_module.xOpen()
** method is called. It returns an sqlite3_tokenizer_cursor object
** that may be used to tokenize a specific input buffer based on
** the tokenization rules supplied by a specific sqlite3_tokenizer
** object.
*/
typedef struct sqlite3_tokenizer_module sqlite3_tokenizer_module;
typedef struct sqlite3_tokenizer sqlite3_tokenizer;
typedef struct sqlite3_tokenizer_cursor sqlite3_tokenizer_cursor;

struct sqlite3_tokenizer_module {

  /*
  ** Structure version. Should always be set to 0 or 1.
  */
  int iVersion;

  /*
  ** Create a new tokenizer. The values in the argv[] array are the
  ** arguments passed to the "tokenizer" clause of the CREATE VIRTUAL
  ** TABLE statement that created the fts3 table. For example, if
  ** the following SQL is executed:
  **
  **   CREATE .. USING fts3( ... , tokenizer <tokenizer-name> arg1 arg2)
  **
  ** then argc is set to 2, and the argv[] array contains pointers
  ** to the strings "arg1" and "arg2".
  **
  ** This method should return either SQLITE_OK (0), or an SQLite error 
  ** code. If SQLITE_OK is returned, then *ppTokenizer should be set
  ** to point at the newly created tokenizer structure. The generic
  ** sqlite3_tokenizer.pModule variable should not be initialized by
  ** this callback. The caller will do so.
  */
  int (*xCreate)(
    int argc,                           /* Size of argv array */
    const char *const*argv,             /* Tokenizer argument strings */
    sqlite3_tokenizer **ppTokenizer     /* OUT: Created tokenizer */
  );

  /*
  ** Destroy an existing tokenizer. The fts3 module calls this method
  ** exactly once for each successful call to xCreate().
  */
  int (*xDestroy)(sqlite3_tokenizer *pTokenizer);

  /*
  ** Create a tokenizer cursor to tokenize an input buffer. The caller
  ** is responsible for ensuring that the input buffer remains valid
  ** until the cursor is closed (using the xClose() method). 
  */
  int (*xOpen)(
    sqlite3_tokenizer *pTokenizer,       /* Tokenizer object */
    const char *pInput, int nBytes,      /* Input buffer */
    sqlite3_tokenizer_cursor **ppCursor  /* OUT: Created tokenizer cursor */
  );

  /*
  ** Destroy an existing tokenizer cursor. The fts3 module calls this 
  ** method exactly once for each successful call to xOpen().
  */
  int (*xClose)(sqlite3_tokenizer_cursor *pCursor);

  /*
  ** Retrieve the next token from the tokenizer cursor pCursor. This
  ** method should either return SQLITE_OK and set the values of the
  ** "OUT" variables identified below, or SQLITE_DONE to indicate that
  ** the end of the buffer has been reached, or an SQLite error code.
  **
  ** *ppToken should be set to point at a buffer containing the 
  ** normalized version of the token (i.e. after any case-folding and/or
  ** stemming has been performed). *pnBytes should be set to the length
  ** of this buffer in bytes. The input text that generated the token is
  ** identified by the byte offsets returned in *piStartOffset and
  ** *piEndOffset. *piStartOffset should be set to the index of the first
  ** byte of the token in the input buffer. *piEndOffset should be set
  ** to the index of the first byte just past the end of the token in
  ** the input buffer.
  **
  ** The buffer *ppToken is set to point at is managed by the tokenizer
  ** implementation. It is only required to be valid until the next call
  ** to xNext() or xClose(). 
  */
  /* TODO(shess) current implementation requires pInput to be
  ** nul-terminated.  This should either be fixed, or pInput/nBytes
  ** should be converted to zInput.
  */
  int (*xNext)(
    sqlite3_tokenizer_cursor *pCursor,   /* Tokenizer cursor */
    const char **ppToken, int *pnBytes,  /* OUT: Normalized text for token */
    int *piStartOffset,  /* OUT: Byte offset of token in input buffer */
    int *piEndOffset,    /* OUT: Byte offset of end of token in input buffer */
    int *piPosition      /* OUT: Number of tokens returned before this one */
  );

  /***********************************************************************
  ** Methods below this point are only available if iVersion>=1.
  */

  /* 
  ** Configure the language id of a tokenizer cursor.
  */
  int (*xLanguageid)(sqlite3_tokenizer_cursor *pCsr, int iLangid);
};
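With the module pointer in hand, here is a rough sketch of tokenizing the user's input and joining the tokens with OR. This is my own code, not from the SQLite sources: build_or_query is a hypothetical helper, the pModule/pTokenizer back-pointers are set by the caller (as the comments above require), and error handling is abbreviated.

#include <stdio.h>
#include <string.h>
#include <sqlite3.h>
#include "fts3_tokenizer.h"

/* Sketch: tokenize zInput with the given module and build an FTS query
** string of the form  "tok1" OR "tok2" OR ...  in zOut. */
static int build_or_query(const sqlite3_tokenizer_module *pModule,
                          const char *zInput, char *zOut, int nOut){
  sqlite3_tokenizer *pTok = 0;
  sqlite3_tokenizer_cursor *pCsr = 0;
  const char *zToken;
  int nToken, iStart, iEnd, iPos;
  int rc, nUsed = 0;

  rc = pModule->xCreate(0, 0, &pTok);          /* no tokenizer arguments */
  if( rc!=SQLITE_OK ) return rc;
  pTok->pModule = pModule;                     /* set by the caller, per above */

  rc = pModule->xOpen(pTok, zInput, (int)strlen(zInput), &pCsr);
  if( rc!=SQLITE_OK ){
    pModule->xDestroy(pTok);
    return rc;
  }
  pCsr->pTokenizer = pTok;                     /* likewise set by the caller */

  zOut[0] = '\0';
  while( (rc = pModule->xNext(pCsr, &zToken, &nToken,
                              &iStart, &iEnd, &iPos))==SQLITE_OK ){
    nUsed += snprintf(zOut + nUsed, (size_t)(nOut - nUsed), "%s\"%.*s\"",
                      nUsed ? " OR " : "", nToken, zToken);
    if( nUsed>=nOut ) break;                   /* output buffer full */
  }
  pModule->xClose(pCsr);
  pModule->xDestroy(pTok);
  return (rc==SQLITE_DONE || rc==SQLITE_OK) ? SQLITE_OK : rc;
}

With the simple tokenizer, an input of hello AND there comes back as "hello" OR "and" OR "there" (the simple tokenizer lower-cases terms), which can then be used as the right-hand side of the MATCH expression.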