
Seems to be a fairly common issue, but I haven't yet been able to find a solution, perhaps because it comes in so many flavors. Here it is, though. I'm trying to read some comma-delimited files (occasionally the delimiters can be a little more unusual than commas, but commas will suffice for now).

The files are supposed to be standardized across the industry, but lately we've seen files coming in with many different character sets. I'd like to be able to set up a BufferedReader to compensate for this.

What is a pretty standard way of doing this and detecting whether it was successful or not?

My first thought is to loop through character sets from simple to complex until I can read the file without an exception. Not exactly ideal, though...
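
A rough sketch of that loop, purely as an illustration (the candidate list and the read-the-whole-file approach are assumptions, not a recommendation): the CharsetDecoder is configured with CodingErrorAction.REPORT so a wrong guess actually throws instead of silently substituting replacement characters.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CodingErrorAction;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class CharsetGuesser {

    // Candidate charsets, tried "simple -> complex"; adjust to whatever your feeds actually use.
    private static final List<Charset> CANDIDATES = Arrays.asList(
            Charset.forName("US-ASCII"),
            Charset.forName("UTF-8"),
            Charset.forName("UTF-16"),       // honours a BOM, defaults to big-endian without one
            Charset.forName("ISO-8859-1"));  // accepts any byte sequence, so it must go last

    public static BufferedReader open(Path file) throws IOException {
        byte[] bytes = Files.readAllBytes(file);
        for (Charset cs : CANDIDATES) {
            try {
                // REPORT makes the decoder throw instead of quietly inserting replacement chars.
                cs.newDecoder()
                  .onMalformedInput(CodingErrorAction.REPORT)
                  .onUnmappableCharacter(CodingErrorAction.REPORT)
                  .decode(ByteBuffer.wrap(bytes));
                // Decoded cleanly: reopen the file with this charset for normal line-by-line reading.
                return Files.newBufferedReader(file, cs);
            } catch (CharacterCodingException e) {
                // Fall through and try the next candidate.
            }
        }
        throw new IOException("None of the candidate charsets could decode " + file);
    }

    public static void main(String[] args) throws IOException {
        try (BufferedReader reader = open(Paths.get(args[0]))) {
            System.out.println(reader.readLine());
        }
    }
}
```

As the comments below point out, a lenient charset like ISO-8859-1 will decode any byte sequence, so the absence of an exception is only a weak signal that the guess was right.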

Thanks for your attention.

Kirk
  • Detecting encodings is a very hard problem, and for some encodings, the only way to know one of them is right is through contextual analysis (which is a very non-trivial task). If you know exactly which encodings you need to support (e.g. UTF-16, UTF-8, ISO-8859-1), it may become easier, but it depends on what those encodings are. – Michael Madsen Feb 07 '12 at 18:17
  • Just because you don't get an exception does not necessarily mean that it was successful – MozenRath Feb 07 '12 at 18:17
  • The thing you mentioned about industry standards is the one thing you should be working on enforcing more strictly. You can use `-Dfile.encoding` as a JVM arg to support only a particular type of encoding – MozenRath Feb 07 '12 at 18:20
  • In the industry I'm in, I only have power over the standards when I create the data. It sucks, but it's the way it is; I can't do anything to enforce the standards. In an ideal world this would be different. --- Anyhow, programs like Notepad++ (which isn't Java as far as I know) seem to be able to do a better job than I can. I'd like to support ANSI, UTF-8, UTF-16, and UCS-2 (big & little endian). Anything outside of that is beyond my current scope. (See the BOM-sniffing sketch after these comments.) – Kirk Feb 07 '12 at 18:22
  • I would then suggest that you run the native2ascii tool on all the files before processing them. Then you won't have to worry about this issue with Java IO – MozenRath Feb 07 '12 at 18:49
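
Since everything in Kirk's list except ANSI normally carries a byte-order mark, a small BOM sniff covers most of that scope. This is only a sketch under that assumption; files without a BOM still need detection or a sensible default:

```java
import java.io.IOException;
import java.io.PushbackInputStream;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public final class BomSniffer {

    /**
     * Inspects the first bytes of the stream for a byte-order mark and returns the
     * matching charset, or the supplied fallback if no BOM is present. Bytes that are
     * not part of a BOM are pushed back, so construct the stream with a pushback
     * buffer of at least 3 bytes: new PushbackInputStream(rawStream, 3).
     */
    public static Charset sniff(PushbackInputStream in, Charset fallback) throws IOException {
        byte[] bom = new byte[3];
        int n = in.read(bom, 0, 3);

        Charset detected = fallback;
        int bomLength = 0;

        if (n >= 3 && (bom[0] & 0xFF) == 0xEF && (bom[1] & 0xFF) == 0xBB && (bom[2] & 0xFF) == 0xBF) {
            detected = StandardCharsets.UTF_8;
            bomLength = 3;
        } else if (n >= 2 && (bom[0] & 0xFF) == 0xFE && (bom[1] & 0xFF) == 0xFF) {
            detected = StandardCharsets.UTF_16BE;   // also covers UCS-2 big endian
            bomLength = 2;
        } else if (n >= 2 && (bom[0] & 0xFF) == 0xFF && (bom[1] & 0xFF) == 0xFE) {
            detected = StandardCharsets.UTF_16LE;   // also covers UCS-2 little endian
            bomLength = 2;
        }

        // Push back whatever was read beyond the BOM so the caller sees the real data.
        if (n > bomLength) {
            in.unread(bom, bomLength, n - bomLength);
        }
        return detected;
    }
}
```

Typical use would be to wrap the FileInputStream in a PushbackInputStream, call sniff(), and then build the InputStreamReader/BufferedReader with the returned charset.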

1 Answer


Mozilla's universalchardet is supposed to be the most efficient detector out there. juniversalchardet is the Java port of it; there is one more port as well. See this SO question for more information: Character Encoding Detection Algorithm.
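
A minimal sketch of how that might be wired into the BufferedReader from the question, assuming the juniversalchardet jar is on the classpath (the 4 KB buffer and the UTF-8 fallback are arbitrary choices, not part of the library):

```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.Charset;

import org.mozilla.universalchardet.UniversalDetector;

public class EncodingAwareReader {

    public static BufferedReader open(String path) throws IOException {
        // First pass: feed the raw bytes to the detector.
        UniversalDetector detector = new UniversalDetector(null);
        try (FileInputStream fis = new FileInputStream(path)) {
            byte[] buf = new byte[4096];
            int nread;
            while ((nread = fis.read(buf)) > 0 && !detector.isDone()) {
                detector.handleData(buf, 0, nread);
            }
        }
        detector.dataEnd();

        String encoding = detector.getDetectedCharset(); // may be null if nothing was recognised
        detector.reset();

        Charset charset = (encoding != null) ? Charset.forName(encoding)
                                             : Charset.forName("UTF-8"); // fallback is an assumption

        // Second pass: re-open the file with the detected (or fallback) charset.
        return new BufferedReader(new InputStreamReader(new FileInputStream(path), charset));
    }
}
```

The detector needs its own pass over the bytes, so the file is opened twice here; for a stream that can't be re-read you would buffer the bytes instead.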

Aravind Yarram