
I have some text files that are encoded with different character encodings, such as ASCII, UTF-8, Big5, and GB2312.

Now I want to determine their exact character encodings so I can view them in a text editor; otherwise they are displayed as garbled characters.

I searched online and found that the `file` command can display the character encoding of a file, like:

$ file -bi *
text/plain; charset=iso-8859-1
text/plain; charset=us-ascii
text/plain; charset=iso-8859-1
text/plain; charset=utf-8

Unfortunately, files encoded with Big5 and GB2312 are both reported as charset=iso-8859-1, so I still can't tell them apart. Is there a better way to check the character encoding of a text file?

  • have you tried [uchardet](https://stackoverflow.com/a/34502845/5351549) or [enconv](https://linux.die.net/man/1/enconv)? – ewcz Feb 11 '18 at 21:19
  • @ewcz Thank you. They work. –  Feb 12 '18 at 05:51
  • You cannot reliably check the encoding; you can only guess. `file` makes a bad guess while `uchardet` makes a better one, but both are guessing. – n. m. could be an AI Feb 12 '18 at 06:03
  • I have a hard time believing you have ASCII-encoded files. It is far more likely to be happenstance that your files' current contents are limited to the C0 Controls and Basic Latin characters. If a file is indeed ASCII, perhaps you have a specification or standard that says so. Then you won't need guessing programs. – Tom Blodget Feb 13 '18 at 00:16
  • @TomBlodget I'm sorry. I don't understand what you mean. –  Feb 13 '18 at 01:48
  • When someone writes a text file, they choose a character encoding. That's almost never ASCII. If they were to choose ASCII, they would likely do so because of a specification or standard. In every case, the reader must use the same encoding to read the file. So, a specification or standard is one way to know which encoding is being used and you should have it available to you. Guessing is very sketchy. You might do so from a sample. But if a file is part of a repetitive process then the file might have different content in the future that could invalidate the guess. – Tom Blodget Feb 13 '18 at 03:45
  • I confirm that `uchardet` is better. It analyses the whole file (just tried with a 20GiB file) as opposed to `file` and `enca`. – tuxayo Jan 20 '20 at 01:53

2 Answers


To some extent, @ewcz's advice works.
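
(If neither tool is installed yet: on Debian/Ubuntu-based systems both ship as packages of the same name; package names may differ on other distributions.)

$ sudo apt install uchardet enca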

$ uchardet *
big5.txt: BIG5
conf: ASCII
gb2312-windows.txt: GB18030
gb.txt: GB18030
test.java: UTF-8

And `enca`, given the language hint `-L chinese`:

$ enca -L chinese *
big5.txt: Traditional Chinese Industrial Standard; Big5
conf: 7bit ASCII characters
gb2312-windows.txt: Simplified Chinese National Standard; GB2312
  CRLF line terminators
gb.txt: Simplified Chinese National Standard; GB2312
test.java: Universal transformation format 8 bits; UTF-8
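
Once the encoding is known, converting everything to UTF-8 avoids per-file editor settings. A minimal sketch of that step (my own addition, not part of the detection tools; the `.utf8.txt` output naming is an arbitrary choice):

$ for f in *.txt; do iconv -f "$(uchardet "$f")" -t UTF-8 "$f" > "${f%.txt}.utf8.txt"; done

This assumes `uchardet` emits a name `iconv` accepts; the names above (BIG5, GB18030, UTF-8, ASCII) all are, but check the detection output first if in doubt.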
  • The huge advantage of `uchardet` is that it analyses the whole file (just tried with a 20GiB file) as opposed to `file` and `enca` – tuxayo Jan 20 '20 at 01:50

You can use a command-line tool like `detect-file-encoding-and-language`:

$ npm install -g detect-file-encoding-and-language

Then you can detect the encoding like so:

$ dfeal "/home/user name/Documents/subtitle file.srt"
# Possible result: { language: french, encoding: CP1252, confidence: { language: 0.99, encoding: 1 } }

Make sure you have Node.js and npm installed. If you don't have them already:

$ sudo apt install nodejs npm
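
To check several files in one go, a simple loop works here too (a sketch; the `*.srt` glob is a placeholder for your own files, and `dfeal` takes one path per invocation):

$ for f in *.srt; do printf '%s: ' "$f"; dfeal "$f"; done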
  • Running this command on a simple text file on macOS doesn't detect a language: `"language": null`. Did I miss something? – Boris May 16 '22 at 09:46
  • How big is your text file? This package can only reliably detect the language with text files of 500 words or more. – gignu May 29 '22 at 23:48