
Java provides ways to write numeric literals in the bases 2, 8, 10 and 16.

I am wondering why base 8 is included, e.g. int x = 0123;?

I am thinking that there might be a reason akin to the way hexadecimal lines up with bytes, where one byte runs exactly from 0x00 to 0xFF (FF + 1 = 256 values), and so forth.
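For reference, a quick sketch of all four literal bases (the class and variable names are mine; binary literals assume Java 7 or later). Note the pitfall that motivates the question: the leading zero makes 0123 octal, i.e. 83, not 123.

```java
public class Bases {
    public static void main(String[] args) {
        int binary  = 0b1111011; // base 2, prefix 0b (Java 7+)
        int octal   = 0173;      // base 8, marked by a leading 0
        int decimal = 123;       // base 10
        int hex     = 0x7B;      // base 16, prefix 0x
        // All four literals denote the same value:
        System.out.println(binary == 123 && octal == 123 && hex == 123); // true
        System.out.println(0123); // prints 83, not 123: the leading zero means octal
    }
}
```

The silent leading-zero syntax, inherited from C, is a well-known source of bugs, which is one reason some newer languages spell octal as 0o123 instead.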

Zabuzard
jokinjo
  • I believe your question is "What are octal numbers used for?" – whatamidoingwithmylife Oct 28 '19 at 07:38
  • Yes, that is how I should have worded the question. Thanks for all of the interesting answers. I thought that it might have a historical basis, but I have not had the experience necessary to put my finger on it. – jokinjo Oct 28 '19 at 12:19

4 Answers


This answer was written for the original question, "Why is writing a number in base 8 useful?"

It was to make the language familiar to those who knew C etc. Then the question is why support it in those!

There were architectures (various PDPs) which used 18-bit words (and others used 36-bit words), so literals in which each digit represents 3 bits would be useful.

Practically, the only place I have seen it used in Java code is for specifying unix-style permissions, e.g. 0777, 0644 etc.
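A minimal sketch of why octal reads naturally for permissions (the class and variable names are mine): each octal digit of the mode is exactly one user/group/other rwx triple, so the triples can be peeled off by shifting three bits at a time.

```java
public class Permissions {
    public static void main(String[] args) {
        int mode = 0644; // rw-r--r-- : one octal digit per user/group/other triple
        int user  = (mode >> 6) & 07; // 6 = rw-
        int group = (mode >> 3) & 07; // 4 = r--
        int other = mode & 07;        // 4 = r--
        System.out.println(user + " " + group + " " + other); // prints "6 4 4"
    }
}
```

The digits of the literal and the extracted triples coincide, which is exactly the readability benefit the answer describes.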

(The tongue-in-cheek answer to why it is supported is "to get upvotes on this question").

Andy Turner
  • Oh, good point, I forgot about file permissions in Linux! Minimal use of available bits... octal fits the bill. – sarlacii Oct 28 '19 at 07:54
  • @sarlacii "minimal use" has nothing to do with it, the "octal"-ness of an int literal only exists in the source code. It is simply easier to grok the permissions denoted by an octal literal because the digits directly correspond to user, group and other. – Andy Turner Oct 28 '19 at 07:58
  • I did my first programming on a mainframe computer that had 60-bit words and a 6-bit character set. Addresses were 18 bits. Of course we used octal. When I first met hexadecimal it was quite peculiar to see letters in a number. – Ole V.V. Oct 28 '19 at 09:07
  • @Andy I still think of it as minimal use though... the choice of bit length is not only code-related. It is also limited by resource. E.g. I might choose to use 8 bits to encode each permission, masking out the bits not used, requiring 3 bytes, or rather use the minimum, 3 bits, and then encode octal numbers, to only use 1 and a half bytes (using the 4th bit as sticky). But yes, it's arbitrary once you go to bit level... as you could read the combined octal value as hex anyway. – sarlacii Oct 28 '19 at 13:14

"The octal numbers are not as common as they used to be. However, Octal is used when the number of bits in one word is a multiple of 3. It is also used as a shorthand for representing file permissions on UNIX systems and representation of UTF8 numbers, etc."

From: https://www.tutorialspoint.com/octal-number-system

yur
  • Not really "number of bits in a word" - it was used on 16-bit minicomputers in the form of a high bit 0/1 and then 5 octal digits. A better description might be: used when the size of a _field_ in a word is a multiple of 3 bits. Those same computers had (frequently) 8 registers - so a register specified in an instruction would have a 3-bit field to name a register, a register-to-register instruction would have two such 3-bit fields; those same computers frequently had a 3-bit opcode, or a 3-bit opcode with an "escape" (e.g., 0111) to a _6-bit_ opcode; etc. etc. – davidbak Oct 28 '19 at 17:01

History of computing (science). To represent a group of bits, base 10 does not fit; base 8 = 2^3 for 3 bits and base 16 = 2^4 for 4 bits fit better.

The advantage of base 8 is that all digits are really digits: 0-7, whereas base 16 has "digits" 0-9A-F.

For the 8 bits of a byte, base 16 (hexadecimal) is a better fit, and it won. On Unix, base 8 (octal) often still is used for the rwx bits (read, write, execute) of user, group and others; hence octal numbers like 0666 or 0777.

Hexadecimal is ubiquitous, not least because computers' word sizes nowadays are multiples of bytes. That the 8-bit byte became a standard is another, though related, story (2^3 bits, and addressing).
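To make that "better fit" concrete, a small sketch using the standard Integer radix helpers (the class and variable names are mine): the nine rwx permission bits split cleanly into three octal digits but awkwardly into hex digits.

```java
public class Radix {
    public static void main(String[] args) {
        int mode = 0777; // rwxrwxrwx: nine permission bits
        System.out.println(Integer.toOctalString(mode)); // "777": one digit per rwx triple
        System.out.println(Integer.toHexString(mode));   // "1ff": triples straddle hex digits
    }
}
```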

Joop Eggen

Original answer for "What are octal numbers (base 8) used for?"

Common Usage of Octal

  • As an abbreviation of binary: For computing machines (such as UNIVAC 1050, PDP-8, ICL 1900, etc.), Octal has been used as an abbreviation of binary because their word size is divisible by three (each octal digit represents three binary digits). So two, four, eight or twelve digits could concisely display an entire machine word. It also cut costs by allowing Nixie tubes, seven-segment displays, and calculators to be used for the operator consoles, where binary displays were too complex to use, decimal displays needed complex hardware to convert radices, and hexadecimal displays needed to display more numerals.

  • 16-, 32-, or 64-bit word representation: All modern computing platforms use 16-, 32-, or 64-bit words, further divided into eight-bit bytes. On such systems, three octal digits per byte would be required, with the most significant octal digit representing two binary digits (plus one bit of the next significant byte, if any). Octal representation of a 16-bit word requires 6 digits, but the most significant octal digit represents (quite inelegantly) only one bit (0 or 1). This representation offers no way to easily read the most significant byte, because it is smeared over four octal digits. Therefore, hexadecimal is more commonly used in programming languages today, since two hexadecimal digits exactly specify one byte. Some platforms with a power-of-two word size still have instruction subwords that are more easily understood if displayed in octal; this includes the PDP-11 and Motorola 68000 family. The modern-day ubiquitous x86 architecture belongs to this category as well, but octal is rarely used on this platform.

  • Encoding descriptions: Certain properties of the binary encoding of opcodes in modern x86 architecture become more readily apparent when displayed in octal, e.g. the ModRM byte, which is divided into fields of 2, 3, and 3 bits, so octal can be useful in describing these encodings.

  • Computations and File access Permissions: Octal is sometimes used in computing instead of hexadecimal, perhaps most often in modern times in conjunction with file permissions under Unix systems (In permission access to chmod). It has the advantage of not requiring any extra symbols as digits (the hexadecimal system is base-16 and therefore needs six additional symbols beyond 0–9).

  • Digital Displays: Octal numbers are also used in displaying digital content on a screen, since octal uses fewer distinct symbols for representation.

  • Graphical representation of byte strings: Some programming languages (C, Perl, PostScript, etc.) can represent bytes in text/graphics strings as octal escapes of the form \nnn. Octal representation is particularly handy with non-ASCII bytes of UTF-8, which encodes in groups of 6 bits, and where any start byte has octal value \3nn and any continuation byte has octal value \2nn.

  • Early Floating-Point Arithmetic: Octal was also used for floating-point in the Ferranti Atlas (1962), Burroughs B5500 (1964), Burroughs B5700 (1971), Burroughs B6700 (1971) and Burroughs B7700 (1972) computers.

  • In Transponders: Aircraft transmit a code, expressed as a four-octal-digit number when interrogated by ground radar. This code is used to distinguish different aircraft on the radar screen.
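To illustrate the UTF-8 point from the list above, a small sketch (the class name is mine) that dumps the UTF-8 bytes of é (U+00E9) in octal: the start byte comes out as \3nn and the continuation byte as \2nn, just as described.

```java
import java.nio.charset.StandardCharsets;

public class Utf8Octal {
    public static void main(String[] args) {
        // é (U+00E9) encodes as two UTF-8 bytes: 0xC3 0xA9
        byte[] bytes = "\u00E9".getBytes(StandardCharsets.UTF_8);
        for (byte b : bytes) {
            System.out.println(Integer.toOctalString(b & 0xFF));
        }
        // prints 303 then 251: start byte \3nn, continuation byte \2nn
    }
}
```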

Further Reading: https://en.wikipedia.org/wiki/Octal

Jishan Shaikh
  • I enjoyed reading the Wikipedia entry, especially the bit about using the spaces between ones fingers to count. However, for the most part, the article could be said to be the impetus of my question. As I suspected, indivisibility of the word sizes used in current computer architecture renders the octal representation not useful, and one of the paragraphs that you cite as "16-, 32-, 64-bit words…" is not clear at all. Could you please explain how “the most significant byte is smeared over four octal digits” and “Some platforms with a power-of-two word size still have instruction subwords…” – jokinjo Nov 30 '19 at 09:17