
Is there a simple way I could use any subclass of Lucene's Analyzer to parse/tokenize a String?

Something like:

String to_be_parsed = "car window seven";
Analyzer analyzer = new StandardAnalyzer(...);
List<String> tokenized_string = analyzer.analyze(to_be_parsed);
Felipe Hummel
  • That's a pretty vague question you're asking. The answer is "Yes". But it depends a lot on *how* you want to parse/tokenize said string. – stevevls Jun 13 '11 at 18:39
  • @stevevls added an example. I used List, but it doesn't necessarily have to be a List. – Felipe Hummel Jun 13 '11 at 18:44

4 Answers


Based on stevevls's answer below, this version is slightly modified to work with Lucene 4.0.

import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public final class LuceneUtil {

  private LuceneUtil() {}

  public static List<String> tokenizeString(Analyzer analyzer, String string) {
    List<String> result = new ArrayList<String>();
    try {
      TokenStream stream = analyzer.tokenStream(null, new StringReader(string));
      stream.reset();  // mandatory since Lucene 4.0: reset before calling incrementToken()
      while (stream.incrementToken()) {
        result.add(stream.getAttribute(CharTermAttribute.class).toString());
      }
      stream.end();    // signal the end of the stream
      stream.close();  // release resources; required before the analyzer can be reused
    } catch (IOException e) {
      // not thrown b/c we're using a string reader...
      throw new RuntimeException(e);
    }
    return result;
  }

}
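
For example (an untested sketch; this assumes Lucene 4.0, where StandardAnalyzer still took a Version argument):

Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_40);
List<String> tokens = LuceneUtil.tokenizeString(analyzer, "car window seven");
System.out.println(tokens);  // [car, window, seven]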
Ben McCann

As far as I know, you have to write the loop yourself. Something like this (taken straight from my source tree):

import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

public final class LuceneUtils {

    // Lucene 3.x API: TermAttribute was replaced by CharTermAttribute in 4.0.
    public static List<String> parseKeywords(Analyzer analyzer, String field, String keywords) {

        List<String> result = new ArrayList<String>();
        TokenStream stream  = analyzer.tokenStream(field, new StringReader(keywords));

        try {
            while(stream.incrementToken()) {
                result.add(stream.getAttribute(TermAttribute.class).term());
            }
        }
        catch(IOException e) {
            // not thrown b/c we're using a string reader...
        }

        return result;
    }
}
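
The field argument only matters if the analyzer behaves differently per field (e.g. a PerFieldAnalyzerWrapper); otherwise any name, even null, works. A usage sketch, assuming Lucene 3.x where StandardAnalyzer takes a Version ("myField" is an arbitrary placeholder):

Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_30);
List<String> tokens = LuceneUtils.parseKeywords(analyzer, "myField", "car window seven");
// tokens: [car, window, seven]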
stevevls

Even better, use try-with-resources! That way you don't have to explicitly call .close(), which is required in newer versions of the library.

public static List<String> tokenizeString(Analyzer analyzer, String string) {
  List<String> tokens = new ArrayList<>();
  try (TokenStream tokenStream = analyzer.tokenStream(null, new StringReader(string))) {
    tokenStream.reset();  // required
    while (tokenStream.incrementToken()) {
      tokens.add(tokenStream.getAttribute(CharTermAttribute.class).toString());
    }
    tokenStream.end();
  } catch (IOException e) {
    throw new RuntimeException(e);  // shouldn't happen: we're reading from a StringReader
  }
  return tokens;
}

And the Tokenizer version:

  try (Tokenizer tokenizer = new HMMChineseTokenizer()) {
    tokenizer.setReader(new StringReader("我说汉语说得很好"));
    tokenizer.reset();
    while (tokenizer.incrementToken()) {
      System.out.println(tokenizer.getAttribute(CharTermAttribute.class).toString());
    }
  } catch (IOException e) {
    throw new RuntimeException(e);  // shouldn't happen...
  }
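
Try-with-resources works here because TokenStream (and thus Tokenizer) implements Closeable. HMMChineseTokenizer is just an example; it lives in the smartcn analyzers module.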
Marsellus Wallace

The latest best practice, as another Stack Overflow answer indicates, seems to be to add an attribute to the token stream up front and then access that attribute, rather than getting the attribute directly from the token stream inside the loop. And for good measure you can make sure the analyzer gets closed. Using the latest Lucene (currently v8.6.2), the code would look like this:

String text = "foo bar";
String fieldName = "myField";
List<String> tokens = new ArrayList<>();
try (Analyzer analyzer = new StandardAnalyzer()) {
  try (final TokenStream tokenStream = analyzer.tokenStream(fieldName, text)) {
    CharTermAttribute charTermAttribute = tokenStream.addAttribute(CharTermAttribute.class);
    tokenStream.reset();
    while(tokenStream.incrementToken()) {
      tokens.add(charTermAttribute.toString());
    }
    tokenStream.end();
  }
}

After that code finishes, tokens will contain the list of parsed tokens; for the input "foo bar" above, that's ["foo", "bar"].
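
(Calling addAttribute before the loop both registers the attribute and returns it, so the same CharTermAttribute instance is updated in place by each incrementToken() call; that's why it can be fetched once outside the loop.)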

See also: Lucene Analysis Overview.

Caveat: I'm just starting to write Lucene code, so I don't have a lot of Lucene experience. I have taken the time to research the latest documentation and related posts, however, and I believe that the code I've placed here follows the latest recommended practices slightly better than the current answers.

Garret Wilson