
I store Facebook comments in Elasticsearch 1.4.4. While indexing, I sometimes get error messages from Elasticsearch about immense terms:

java.lang.IllegalArgumentException: Document contains at least one immense term 
in field="message" (whose UTF8 encoding is longer than the max length 32766), 
all of which were skipped. Please correct the analyzer to not produce such terms.  
The prefix of the first immense term is: '[-40, -75, -39, -124, -39, -118, 32, -40, -89, -39, -124, -39, -124, -39, -121, 32, -40, -71, -39, -124, -39, -118, -39, -121, 32, -39, -120, -40, -77, -39]...', original message: bytes can be at most 32766 in length; got 40986

The reason is presumably that some UTF-8 terms are longer than 32766 bytes (see also this SO question).

I want to detect such messages and either skip them during indexing or sanitize overly large input messages. So I tried checking the byte size of the failing UTF-8-encoded Strings, but it is often far below the magic 32766-byte limit, e.g.:

String failingMessage = "ﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺﷺ";
failingMessage.getBytes(StandardCharsets.UTF_8).length == 3728

So how can I prevent Elasticsearch from throwing IllegalArgumentExceptions for this input? Is there a good way to sanitize UTF-8 text containing this kind of long term? Is my string-to-byte-size approach wrong? (Long, useful comments are very rare on Facebook, so it doesn't matter if I skip every too-long text.)

The Elasticsearch analyzer I use to index the message field:

            "en_analyzer": {
                "type": "custom",
                "tokenizer": "icu_tokenizer",
                "filter": ["icu_folding", "icu_normalizer", "en_stop_filter", "en_stem_filter"]
            },
Sonson123
  • I store Stack Overflow questions in Elasticsearch 1.4.4. I found your question because our Elasticsearch now throws the same error (because of the error message you provided). How cool is that? – Jacket Mar 09 '15 at 13:55
  • *rofl* I wasn't aware how dangerous my question was. – Sonson123 Mar 09 '15 at 14:00
  • Strangely, your term is 3726 bytes long, yet our ES instance cannot devour it either. And since we stop at every error, our complete indexing has now ground to a halt. I will try to search for an answer to the issue after we fix our operations :) – Jacket Mar 09 '15 at 14:25
  • Strangest thing. Almost all of my analyzers convert this string into a single Arabic term exactly 40986 bytes long. That's what my error message says too. See this screenshot - http://i.imgur.com/SOJ2xfQ.png. And here's the actual string - http://pastebin.com/YnpGMVDS – Jacket Mar 09 '15 at 14:44
  • Really interesting. I also think the text should be Arabic (it comes from this [Facebook post](https://www.facebook.com/ManSoUrawyGdn/photos/a.231672773684866.1073741828.231651153687028/370553149796827/?type=1) - search for the comment from Abdo), but I think I know too little about Unicode. And sorry for stopping your crawler ;-). – Sonson123 Mar 09 '15 at 14:57

2 Answers


I ended up working around the problem in my indexing script, because I too couldn't find a way to predict the length of each term before it passes through all the analyzers...

I know it's kind of a lame work-around, but at least it doesn't kill the whole indexer.

Before (PHP function using elasticsearch-php):

function elastic_bulk_operation($params){
    if(count($params) == 0){
        return true;
    }
    try{
        $client = new Elasticsearch\Client(['host' => ELASTIC_SEARCH_HOST]);
        $result = $client->bulk($params);
        foreach($result['items'] as $item){
            if(!empty($item['index']['error'])){
                return false;
            }
        }
        return true;
    }catch(Exception $e){
        return false;
    }
    return true;
}

Now:

function elastic_bulk_operation($params){
    if(count($params) == 0){
        return true;
    }
    try{
        $client = new Elasticsearch\Client(['host' => ELASTIC_SEARCH_HOST]);
        $result = $client->bulk($params);
        foreach($result['items'] as $item){
            // ignore documents rejected only because of the immense-term error
            if(!empty($item['index']['error']) && strpos($item['index']['error'],"Document contains at least one immense term") === false){
                return false;
            }
        }
        return true;
    }catch(Exception $e){
        if(strpos($e->getMessage(),"Document contains at least one immense term") === false){
            return false;
        }
    }
    return true;
}
Jacket
  • Thanx for sharing the code. (By the way, I hope Elasticsearch will improve the error handling in the future; comparing exceptions messages is a pain.) – Sonson123 Mar 23 '15 at 15:20

I stumbled upon this question for exactly the same reason Jacket did earlier, which is quite funny.

Our crawler found this page, extracted the text, and checked that it was not longer than 32766 bytes, but we got the same error message when trying to index the document in Elasticsearch.

Apparently the reason is that we have icu_normalizer and icu_folding filters in the analyzer for the field that stores the page's content. Both of these filters expand Unicode ligatures, and unfortunately the ligature from the question, U+FDFA ARABIC LIGATURE SALLALLAHOU ALAYHE WASALLAM, expands to a string of 33 bytes: "صلي الله عليه وسلم", resulting in a token of 33 * 1242 = 40986 bytes! The following _analyze call confirms this:

$ curl '127.0.0.1:9200/_analyze' -d '{"tokenizer":"keyword","token_filters":["icu_folding"],"text":"ﷺ"}'
{"tokens":[{"token":"صلي الله عليه وسلم","start_offset":0,"end_offset":1,"type":"word","position":0}]}

For now we have solved it by replacing the ligature U+FDFA with its expanded text, but there are many other ligatures that should be handled as well (e.g. "ﬃ" => "ffi"); it's just that U+FDFA expands to such a long string that it was caught first.
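
A rough client-side pre-check along the same lines, as a minimal sketch: it assumes NFKC approximates the icu_normalizer/icu_folding expansion and that whitespace splitting approximates the tokenizer, both of which are simplifications, so it can still miss cases:

import java.nio.charset.StandardCharsets;
import java.text.Normalizer;

public class MessageSanitizer {
    // Lucene's hard per-term limit from the error message
    private static final int MAX_TERM_BYTES = 32766;

    // Returns true if the message should be safe to index under the assumptions above.
    static boolean fitsTermLimit(String message) {
        // Expand compatibility ligatures roughly the way the analyzer would ...
        String normalized = Normalizer.normalize(message, Normalizer.Form.NFKC);
        // ... then approximate tokenization by splitting on whitespace.
        for (String token : normalized.split("\\s+")) {
            if (token.getBytes(StandardCharsets.UTF_8).length > MAX_TERM_BYTES) {
                return false;
            }
        }
        return true;
    }
}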

Raman