I am having a little problem with the UTF-8 charset. I have a UTF-8 encoded file which I want to load and analyze. I am using BufferedReader to read the file line by line.
BufferedReader buffReader = new BufferedReader(
        new InputStreamReader(new FileInputStream(file), "UTF-8"));
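For completeness, the loop around it is just a plain readLine() loop; a simplified version of it (the file name here is only an example) looks like this:

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class ReadLines {
    public static void main(String[] args) throws IOException {
        // Read the file line by line, decoding the bytes as UTF-8
        try (BufferedReader buffReader = new BufferedReader(
                new InputStreamReader(new FileInputStream("menu.xml"), "UTF-8"))) {
            String line;
            while ((line = buffReader.readLine()) != null) {
                // each line is then trimmed and compared against expected tags
                System.out.println(line.trim());
            }
        }
    }
}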
My problem is that the normal String methods (trim() and equals(), for example) do not behave as expected on the lines I read from the BufferedReader in each iteration of the loop that reads the whole file.
For example, the file contains < menu >, which I want my program to treat as exactly that string. Instead, the line comes back looking like ?? < m e n u >, mixed with other strange characters.
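To show what I mean, this is the kind of comparison that fails, together with a small dump of the actual code points so you can see what the reader is really giving me (the expected string "<menu>" is just an example):

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;

public class DumpFirstLine {
    public static void main(String[] args) throws IOException {
        try (BufferedReader buffReader = new BufferedReader(
                new InputStreamReader(new FileInputStream("menu.xml"), "UTF-8"))) {
            String line = buffReader.readLine();
            if (line != null) {
                // This comparison fails even though the line visually contains <menu>
                System.out.println("equals <menu>? " + "<menu>".equals(line.trim()));
                // Print each character's Unicode code point to reveal hidden characters
                for (int i = 0; i < line.length(); i++) {
                    System.out.printf("U+%04X %c%n", (int) line.charAt(i), line.charAt(i));
                }
            }
        }
    }
}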
I want to know whether there is a way to strip out whatever encoding artifacts these are and keep just the plain text, so I can use all the methods of the String class without complications.
Thank you