They can guess it based on heuristics.
I don't know how good browsers are at encoding detection these days, but MS Word does a very good job of it and recognizes even charsets I had never heard of before. You can just open a *.txt file in a random encoding and see for yourself.
The algorithm usually involves statistical analysis of byte patterns, such as the frequency distribution of trigraphs in the various languages that each detectable code page encodes; the same statistical analysis can also be used to perform language detection.
https://en.wikipedia.org/wiki/Charset_detection
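To make the idea concrete, here is a toy sketch in Python (the names, candidate encodings and "common character" sets are my own illustration, not any browser's actual code) that guesses by scoring single-character frequencies instead of full trigraph tables:

```python
# Toy statistical charset detection: each candidate encoding is paired with a
# few characters that are very frequent in text usually written in it, and the
# decode whose output contains the most of them wins. Real detectors use much
# richer statistics (byte-trigram frequency tables per language/code page).

FREQUENT = {
    "cp1252":     set("etaoins "),   # Western European / English
    "cp1251":     set("оеаинтс "),   # Russian (Cyrillic)
    "iso-8859-7": set("αοειτσν "),   # Greek
}

def guess_encoding(data: bytes) -> str:
    best, best_score = "unknown", -1.0
    for enc, common in FREQUENT.items():
        try:
            text = data.decode(enc).lower()
        except UnicodeDecodeError:
            continue                 # these bytes are impossible in this encoding
        score = sum(ch in common for ch in text) / max(len(text), 1)
        if score > best_score:
            best, best_score = enc, score
    return best

print(guess_encoding("Просто немного русского текста".encode("cp1251")))  # cp1251
print(guess_encoding("Just some plain English text".encode("cp1252")))    # cp1252
```

Note that the second call only wins because the scoring favours the Latin letter set; pure ASCII bytes are equally valid in all three encodings, which is exactly why this can only ever be a guess.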
Firefox uses the Mozilla Charset Detectors. How they work is explained in Mozilla's documentation, and you can also change their heuristic preferences. The Mozilla Charset Detectors were even forked into uchardet, which works better and detects more languages.
[Update: as commented below, Firefox moved to chardetng as of Firefox 73]
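If you want to experiment with this family of detectors from Python, the chardet package is a port of the Mozilla universal charset detector; the sample text and the printed result below are only illustrative:

```python
# pip install chardet  (a Python port of Mozilla's universal charset detector)
import chardet

raw = "Просто обычный русский текст без какой-либо декларации".encode("cp1251")
print(chardet.detect(raw))
# Likely something like:
#   {'encoding': 'windows-1251', 'confidence': 0.9..., 'language': 'Russian'}
# The confidence value is a reminder that this is still a statistical guess.
```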
Chrome previously used the ICU detector but switched to CED (Compact Encoding Detection) almost two years ago.
None of the detection algorithms are perfect; they can guess incorrectly, as in the famous example below, because it's just guessing anyway!
This process is not foolproof because it depends on statistical data.
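A small Python sketch (illustrative, not any detector's real code) shows how a wrong guess can happen without any error to warn about it: the 18 ASCII bytes of one harmless English sentence are also perfectly valid UTF-16LE:

```python
data = "Bush hid the facts".encode("ascii")
print(len(data))                  # 18 bytes -> nine 2-byte UTF-16 code units
print(data.decode("utf-16-le"))   # decodes without error into nine CJK characters
```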
That byte-level coincidence is exactly how the famous "Bush hid the facts" bug in Windows Notepad occurred: the heuristic guessed UTF-16LE for a plain ASCII file. Bad guessing also introduces vulnerabilities into the system:
For all those skeptics out there, there is a very good reason why the character encoding should be explicitly stated. When the browser isn't told what the character encoding of a text is, it has to guess: and sometimes the guess is wrong. Hackers can manipulate this guess in order to slip XSS past filters and then fool the browser into executing it as active code. A great example of this is the Google UTF-7 exploit.
http://htmlpurifier.org/docs/enduser-utf8.html#fixcharset-none
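The trick is easy to demonstrate (a Python sketch; the payload is only being encoded here, nothing is executed). UTF-7 spells the angle brackets with plain ASCII letters and +/- signs, so a filter scanning the raw bytes for <script sees nothing, while a browser that guesses UTF-7 decodes it back into markup:

```python
payload = "<script>alert(1)</script>".encode("utf-7")
print(payload)               # b'+ADw-script+AD4-alert(1)+ADw-/script+AD4-'
print(b"<script" in payload) # False -> a naive byte-level filter misses it
print(payload.decode("utf-7"))  # <script>alert(1)</script>
```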
As a result, the encoding should always be explicitly stated.
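For example, here is a minimal sketch with Python's standard http.server (purely illustrative) of what "explicitly stated" looks like in practice: the charset is sent in the Content-Type header and repeated in a meta tag, so the browser never has to guess:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = ("<!doctype html><html><head><meta charset='utf-8'></head>"
        "<body>Résumé, naïve, Привет</body></html>")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = PAGE.encode("utf-8")
        self.send_response(200)
        # The charset parameter tells the browser exactly how to decode the
        # bytes, so no heuristic detection is ever triggered.
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```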