I have been using the ASCII folding filter to handle diacritics, not just for documents in Elasticsearch but for various other kinds of strings:
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import com.google.common.base.Strings; // Guava's Strings.isNullOrEmpty in my case

public static String normalizeText(String text, boolean shouldTrim, boolean shouldLowerCase) {
    if (Strings.isNullOrEmpty(text)) {
        return text;
    }
    if (shouldTrim) {
        text = text.trim();
    }
    if (shouldLowerCase) {
        text = text.toLowerCase();
    }
    char[] charArray = text.toCharArray();
    // Folding a single character can produce more than one output character; the official
    // documentation says the output array should be of size >= length * 4.
    char[] out = new char[charArray.length * 4 + 1];
    int outLength = ASCIIFoldingFilter.foldToASCII(charArray, 0, out, 0, charArray.length);
    return String.copyValueOf(out, 0, outLength);
}
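For illustration, with both flags set, a call like normalizeText("Çàfé ", true, true) comes out as "cafe", which is the behavior I want.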
However, as per the official documentation, the method carries the note: "This API is for internal purposes only and might change in incompatible ways in the next release."
The alternative is the non-static foldToASCII(char[] input, int length) (which internally calls the same static method), but using it requires constructing an ASCIIFoldingFilter, which in turn needs a TokenStream and therefore an analyzer (and that means choosing one, possibly writing a custom one). I couldn't find examples where developers have gone that route.
I tried writing some solutions of my own, but the non-static foldToASCII doesn't give me the exact output; it leaves a run of unwanted characters at the end. I am wondering how various developers have dealt with this? The kind of TokenStream wiring I have in mind is sketched below.
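This is a minimal sketch of that wiring, assuming a recent Lucene analysis module on the classpath; KeywordTokenizer and the method name foldWithTokenStream are just my own choices here, picked so the whole string stays a single token rather than going through a real analyzer:

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public static String foldWithTokenStream(String text) throws IOException {
    // KeywordTokenizer emits the whole input as one token, so nothing is split away.
    KeywordTokenizer tokenizer = new KeywordTokenizer();
    tokenizer.setReader(new StringReader(text));
    try (TokenStream stream = new ASCIIFoldingFilter(tokenizer)) {
        CharTermAttribute termAtt = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        StringBuilder sb = new StringBuilder();
        while (stream.incrementToken()) {
            // Copy only termAtt.length() chars; the backing buffer can be larger,
            // which is where trailing junk appears if you copy the whole buffer.
            sb.append(termAtt.buffer(), 0, termAtt.length());
        }
        stream.end();
        return sb.toString();
    }
}

It stays on the public API, but it is clearly heavier per call than the static fold, which is part of why I am asking whether the extra ceremony is worth it.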
EDIT: I also see that some open-source projects are using the static foldToASCII, so another question would be whether it is really worth it to use the non-static foldToASCII.