5

I understand there are two ways to set the encoding:

  1. By using the Content-Type header.
  2. By using a meta tag in the HTML.

The Content-Type header is not mandatory and must be set explicitly (the server side can set it if it wants to), and the meta tag is also optional.

In case both of these are not present, how does the browser determine the encoding used for parsing the content?

phuclv
Vivek Kumar
    https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/ – CodeCaster Mar 31 '17 at 19:42

3 Answers

7

They can guess it based on heuristics

I don't know how good browsers are at encoding detection today, but MS Word did a very good job of it and recognized even charsets I had never heard of before. You can just open a *.txt file in a random encoding and see.

This algorithm usually involves statistical analysis of byte patterns, such as the frequency distribution of trigraphs in various languages as encoded in each code page to be detected; such statistical analysis can also be used to perform language detection.
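To make the idea concrete, here is a much-simplified sketch in Go of what a detector's first steps look like: check for a byte-order mark, then test whether the bytes are valid UTF-8. Real detectors (uchardet, CED) go much further and run statistical models over byte n-gram frequencies; the function and return strings below are illustrative, not any library's API.

```go
package main

import (
	"bytes"
	"fmt"
	"unicode/utf8"
)

// detectEncoding is a toy heuristic: BOM sniffing first, then a
// UTF-8 validity check. Real detectors fall back to statistical
// analysis of byte patterns instead of giving up.
func detectEncoding(data []byte) string {
	switch {
	case bytes.HasPrefix(data, []byte{0xEF, 0xBB, 0xBF}):
		return "UTF-8 (BOM)"
	case bytes.HasPrefix(data, []byte{0xFF, 0xFE}):
		return "UTF-16LE" // (could also be UTF-32LE; a real detector checks further)
	case bytes.HasPrefix(data, []byte{0xFE, 0xFF}):
		return "UTF-16BE"
	case utf8.Valid(data):
		return "UTF-8"
	default:
		return "unknown (would need statistical analysis)"
	}
}

func main() {
	fmt.Println(detectEncoding([]byte("héllo")))            // UTF-8
	fmt.Println(detectEncoding([]byte{0xFF, 0xFE, 'h', 0})) // UTF-16LE
}
```

Note that the UTF-8 check alone is already useful in practice: random bytes from a legacy single-byte encoding are very unlikely to form valid multi-byte UTF-8 sequences.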

https://en.wikipedia.org/wiki/Charset_detection

Firefox uses the Mozilla Charset Detectors. The way they work is explained here, and you can also change their heuristic preferences. The Mozilla Charset Detectors were even forked into uchardet, which works better and detects more languages.

[Update: As commented below, it moved to chardetng since Firefox 73]

Chrome previously used the ICU detector but switched to CED almost 2 years ago.


None of the detection algorithms is perfect; they can guess incorrectly, like this, because it's just guessing anyway!

This process is not foolproof because it depends on statistical data.

That's how the famous "Bush hid the facts" bug occurred. Bad guessing also introduces vulnerabilities into the system:

For all those skeptics out there, there is a very good reason why the character encoding should be explicitly stated. When the browser isn't told what the character encoding of a text is, it has to guess: and sometimes the guess is wrong. Hackers can manipulate this guess in order to slip XSS past filters and then fool the browser into executing it as active code. A great example of this is the Google UTF-7 exploit.

http://htmlpurifier.org/docs/enduser-utf8.html#fixcharset-none

As a result, the encoding should always be explicitly stated.

phuclv
-1

I've encountered problems with the output encoding of HTML. If you are creating a website or web service with e.g. Node.js or Golang and you're not sure, just add the Content-Type charset to the header. For example, in Golang: resp.Header.Set("Content-Type", "text/html; charset=GB18030")

o0omycomputero0o
-2

It is set in the <head> like this:

<meta charset="UTF-8">

I think if this is not set in the head, the browser will fall back to a default encoding.

phuclv
Tony