Since I was asked in the comments how I would solve this, I'll write it up as an answer.
Being in this situation at all suggests a mistake in the application design, so consider what it actually means. You have a text whose length you cannot know in advance, which can be extremely long (up to 64 KB), and whose uniqueness you want to enforce. Imagine that amount of data broken into individual key parts and assembled into a composite index to guarantee uniqueness; that is effectively what you are asking the database to do. With integers, it would amount to a composite index over roughly 16,000 four-byte values.
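To see why this fails in practice, here is a minimal sketch (MySQL/InnoDB assumed; the table and column names are made up). A unique index over a full TEXT column is rejected outright, and the prefix-index workaround only makes the first N characters unique:

```sql
-- Rejected by MySQL/InnoDB with:
-- ERROR 1170 (42000): BLOB/TEXT column 'body' used in key
-- specification without a key length
CREATE TABLE documents (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    body TEXT NOT NULL,
    UNIQUE KEY uq_body (body)
);

-- A prefix index is accepted, but it only enforces uniqueness of
-- the first 255 characters, and InnoDB caps index keys at a few
-- KB anyway, nowhere near 64 KB:
-- UNIQUE KEY uq_body_prefix (body(255))
```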
Consider further that character-type fields (CHAR, VARCHAR, TEXT) are subject to interpretation through their encoding and collation, which complicates the matter further.
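As a small illustration of that point (again MySQL assumed, with a deliberately case-insensitive collation; the table is made up): equality for a unique index is decided by the collation, not by the raw bytes, so two byte-wise different values can still collide:

```sql
-- Under a case-insensitive collation, 'Foo' and 'foo' are equal
-- as far as the unique index is concerned:
CREATE TABLE t (
    name VARCHAR(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci,
    UNIQUE KEY uq_name (name)
);

INSERT INTO t VALUES ('Foo');  -- ok
INSERT INTO t VALUES ('foo');  -- ERROR 1062: Duplicate entry 'foo'
```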
I'd strongly recommend splitting the data up somehow. This not only spares the DBMS from comparing huge variable-length character blocks, it may also let you build a composite key over parts of the data, as sketched below. You might even find a better storage model for the data altogether.
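To make the splitting idea concrete, a hedged sketch: suppose (purely an assumption, since we don't know your data) the TEXT column actually holds something structured, such as a postal address. Split into typed columns, uniqueness becomes an ordinary composite key over short, well-defined parts:

```sql
-- Hypothetical redesign: the columns are placeholders; the point
-- is that each part is short and typed, so the composite unique
-- key stays small:
CREATE TABLE addresses (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    street   VARCHAR(120) NOT NULL,
    house_no VARCHAR(10)  NOT NULL,
    zip      VARCHAR(10)  NOT NULL,
    city     VARCHAR(80)  NOT NULL,
    UNIQUE KEY uq_address (street, house_no, zip, city)
);
```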
If you have questions, I'd suggest posting the table and/or database structure and explaining what logical data the TEXT field contains and why you think it needs to be unique.