I have a PHP file that I created with VIM, but I'm not sure which is its encoding.

When I check the encoding from the terminal with the command `file -bi foo` (my operating system is Ubuntu 11.04), it gives me the following result:

text/html; charset=us-ascii

But, when I open the file with gedit it says its encoding is UTF-8.

Which one is correct? I want the file to be encoded in UTF-8.

My guess is that there's no BOM in the file, and that `file -bi` reads the file, doesn't find any non-ASCII characters, and so assumes it's ASCII, even though it's actually encoded in UTF-8.

ecantu

4 Answers

$ file --mime my.txt 
my.txt: text/plain; charset=iso-8859-1
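
If you only want the charset, `file` also has a `--mime-encoding` flag (a small addition on my part, assuming a reasonably recent GNU `file`):

```shell
# Print only the detected encoding, without the MIME type
file --mime-encoding my.txt
# e.g. "my.txt: us-ascii" for a pure-ASCII file
```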
Green Lei
    I find it important to note that, as @Celada has already mentioned, `file` cannot *guarantee* that its detection is 100% correct. – Rui Pimentel Mar 18 '16 at 19:36

Well, first of all, note that ASCII is a subset of UTF-8, so if your file contains only ASCII characters, it's correct to say that it's encoded in ASCII and it's correct to say that it's encoded in UTF-8.

That being said, file typically only examines a short segment at the beginning of the file to determine its type, so it might be declaring it us-ascii if there are non-ASCII characters but they are beyond the initial segment of the file. On the other hand, gedit might say that the file is UTF-8 even if it's ASCII because UTF-8 is gedit's preferred character encoding and it intends to save the file with UTF-8 if you were to add any non-ASCII characters during your edit session. Again, if that's what gedit is saying, it wouldn't be wrong.
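
To see the subset relationship in practice, here is a quick terminal experiment (the file names are mine, and it assumes a UTF-8 locale and GNU `file`):

```shell
# A file with only ASCII bytes vs. one with a multi-byte UTF-8 character
printf 'hello\n' > ascii.txt
printf 'h\303\251llo\n' > nonascii.txt   # \303\251 is "é" in UTF-8

file -bi ascii.txt      # reports charset=us-ascii
file -bi nonascii.txt   # reports charset=utf-8
```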

Now to your question:

  1. Run this command:

    tr -d \\000-\\177 < your-file | wc -c
    

    If the output says "0", then the file contains only ASCII characters. It's in ASCII (and it's also valid UTF-8). End of story.

  2. Run this command:

    iconv -f utf-8 -t ucs-4 < your-file >/dev/null
    

    If you get an error, the file does not contain valid UTF-8 (or at least, some part of it is corrupted).

    If you get no error, the file is extremely likely to be UTF-8. That's because UTF-8 has properties that make it very hard to mistake typical text in any other commonly used character encoding for valid UTF-8.
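
The two steps above can be combined into a small shell sketch (the function name `check_utf8` is my own, not part of the original commands):

```shell
#!/bin/sh
# Classify a file using the two checks described above
check_utf8() {
  if [ "$(tr -d '\000-\177' < "$1" | wc -c)" -eq 0 ]; then
    echo "pure ASCII (also valid UTF-8)"
  elif iconv -f utf-8 -t ucs-4 < "$1" > /dev/null 2>&1; then
    echo "valid UTF-8"
  else
    echo "not valid UTF-8"
  fi
}

printf 'plain text\n' > sample.txt
check_utf8 sample.txt   # -> pure ASCII (also valid UTF-8)
```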

Celada

(on Linux)

$ chardet <filename>

It also reports a confidence level in [0, 1] for its guess.

Arthur Zennig
    `chardet` seems to be a Python wrapper around `uchardet`, the "Universal" character encoding detector. `uchardet` is available on macOS via Homebrew, although it doesn't give a confidence level. – Heath Raftery Sep 14 '22 at 05:31

Based on @Celada's answer and @Arthur Zennig's, I have created this simple script:

#!/bin/bash

if [ "$#" -lt 1 ]
then
  echo "Usage: utf8-check filename"
  exit 1
fi

chardet "$1"

# Step 1: if stripping all ASCII bytes leaves nothing, the file is pure ASCII
countchars="$(tr -d '\000-\177' < "$1" | wc -c)"
if [ "$countchars" -eq 0 ]
then
  echo "ASCII"
  exit 0
fi

# Step 2: a round-trip through iconv fails if the file is not valid UTF-8
if iconv -f utf-8 -t ucs-4 < "$1" > /dev/null 2>&1
then
  echo "UTF-8"
else
  echo "not UTF-8 or corrupted"
fi
Thiago Mata