Octal can represent a binary string of any length exactly: each octal digit corresponds to exactly three bits.
It's true that with 8 bit bytes and byte addressing, hexadecimal
seems more natural. Historically, however, a lot of machines had
36 bit words, which divide evenly into twelve octal digits (but
not into 4 bit hexadecimal digits), so octal made a lot of
sense; and
on the PDP-11 (on which the first C compilers ran), the
machine instructions were divided into 3 bit groups: a high
bit flagging whether the operation was on bytes or words, then
a 3 bit op-code, and two 6 bit operand fields, each consisting
of a 3 bit addressing mode followed by the 3 bit register
involved.
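
A minimal sketch of why that layout favors octal (my own
illustration, not anything from the original compilers): each
3 bit field is exactly one octal digit, so octal shifts and
masks pull an instruction word apart cleanly. The constant
0010203 below is the PDP-11 encoding of MOV R2,R3.

    #include <stdio.h>

    int main(void)
    {
        unsigned insn = 0010203;                 /* MOV R2,R3 */
        unsigned byte_op  = (insn >> 15) & 01;   /* byte/word flag */
        unsigned opcode   = (insn >> 12) & 07;   /* 3 bit op-code (1 = MOV) */
        unsigned src_mode = (insn >>  9) & 07;   /* source addressing mode */
        unsigned src_reg  = (insn >>  6) & 07;   /* source register */
        unsigned dst_mode = (insn >>  3) & 07;   /* destination addressing mode */
        unsigned dst_reg  =  insn        & 07;   /* destination register */
        printf("byte=%o op=%o src=%o%o dst=%o%o\n",
               byte_op, opcode, src_mode, src_reg, dst_mode, dst_reg);
        return 0;
    }

Note that the masks themselves (01, 07) are octal constants, one
digit per 3 bit field; decimal offers no such clean correspondence.
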
At the time C was first being invented, octal was probably
used more frequently than hexadecimal, and so the authors of the
language provided for it. (I can't recall having seen it actually
used in code for a very long time, however.)