I do admit that this is a borderline case of being a good answer to my question, but as I have researched the problem somewhat, this answer both describes the approach I chose and gives some more information on the nature of the problem should someone bump into it.
"The right answer" a.k.a. final algorithm
What I ended up with is a variant of what I describe in the question. First, each glyph is split into trits: 0, 1, and intermediate. This ternary information is then compressed with a 256-slot static dictionary. Each item in the dictionary (or look-up table) is a binary-coded string (0=0, 10=1, 11=intermediate) with a single 1 added at the most significant end as an end sentinel.
The grayscale data (for the intermediate trits) is interspersed between the references to the look-up table. So, the data essentially looks like this:
<LUT reference><gray value><gray value><LUT reference>...
The number of gray scale values naturally depends on the number of intermediate trits in the ternary data looked up from the static dictionary.
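As a small worked example (my own illustration, assuming the word is read least-significant-bit first, as the decoder below does), the trit sequence 0, 1, g would be stored as one dictionary word:

/* bit 0    : 0      -> trit '0' (background pixel)
   bits 1-2 : 1, 0   -> trit '1' (full pixel)
   bits 3-4 : 1, 1   -> intermediate pixel (one gray octet follows in the stream)
   bit 5    : 1      -> the single 1 at the most significant end (end sentinel)

   dictionary word = 0b111010 = 0x3A, and the compressed stream for these
   three pixels is <LUT reference to that slot><gray value> */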
Decompression code is very short and can easily be written as a state machine with only one pointer and one 32-bit variable giving the state. Something like this:
#include <stdint.h>

/* the 256-item static dictionary of binary-coded ternary strings
   (32-bit items assumed here; see the mixed-width variant below) */
extern const uint32_t dictionary[256];

static uint32_t trits_to_decode;
static uint8_t *next_octet;

/* This should be called when starting to decode a glyph
   data : pointer to the compressed glyph data */
void start_glyph(uint8_t *data)
{
    next_octet = data;      // set the pointer to the beginning of the glyph
    trits_to_decode = 1;    // this triggers reloading a new dictionary item
}
/* This function returns the next 8-bit pixel value */
uint8_t next_pixel(void)
{
    // end sentinel only? if so, we are out of ternary data
    if (trits_to_decode == 1)
        // get the next ternary dictionary item
        trits_to_decode = dictionary[*next_octet++];

    // get the next pixel from the ternary word
    // check the LSB bit(s)
    if (trits_to_decode & 1)
    {
        trits_to_decode >>= 1;
        // either full value or gray value, check the next bit
        if (trits_to_decode & 1)
        {
            trits_to_decode >>= 1;
            // grayscale value; get next from the buffer
            return *next_octet++;
        }
        // if we are here, it is a full value
        trits_to_decode >>= 1;
        return 255;
    }
    // we have a zero, return it
    trits_to_decode >>= 1;
    return 0;
}
(The code has not been tested in exactly this form, so there may be typos or other stupid little errors.)
There is a lot of repetition with the shift operations. I am not too worried, as the compiler should be able to clean it up. (Actually, left shift could be even better, because then the carry bit could be used after shifting. But as there is no direct way to do that in C, I don't bother.)
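For reference, here is a minimal sketch of how the two functions above would be used to decompress one glyph (untested as well; the buffer and size parameters are just illustrative):

/* Decompress one glyph into an 8-bit pixel buffer of width * height pixels. */
void decode_glyph(uint8_t *compressed, uint8_t *pixels, int width, int height)
{
    start_glyph(compressed);
    for (int i = 0; i < width * height; i++)
        pixels[i] = next_pixel();
}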
One more optimization relates to the size of the dictionary (look-up table). There may be short and long items, and hence it can be built to support 32-bit, 16-bit, or 8-bit items. In that case the dictionary has to be ordered so that small numerical values refer to 32-bit items, middle values to 16-bit items and large values to 8-bit items to avoid alignment problems. Then the look-up code looks like this:
static uint32_t dictionary_lookup(uint8_t octet)
{
    if (octet < NUMBER_OF_32_BIT_ITEMS)
        return dictionary32[octet];
    if (octet < NUMBER_OF_32_BIT_ITEMS + NUMBER_OF_16_BIT_ITEMS)
        return dictionary16[octet - NUMBER_OF_32_BIT_ITEMS];
    return dictionary8[octet - NUMBER_OF_16_BIT_ITEMS - NUMBER_OF_32_BIT_ITEMS];
}
Of course, if every font has its own dictionary, the constants will become variables looked up from the font information. Any half-decent compiler will inline that function, as it is called only once.
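As a sketch of what that per-font information might look like (the struct and field names are purely illustrative, not an actual format):

/* Hypothetical per-font descriptor: the compile-time constants of
   dictionary_lookup() become fields read from the font data. */
typedef struct {
    const uint32_t *dictionary32;    /* long dictionary items   */
    const uint16_t *dictionary16;    /* medium dictionary items */
    const uint8_t  *dictionary8;     /* short dictionary items  */
    uint16_t number_of_32_bit_items;
    uint16_t number_of_16_bit_items;
    /* ... glyph index, compressed glyph data, metrics, etc. */
} font_info_t;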
If the number of quantization levels is reduced, it can be handled as well. The easiest case is 4-bit gray levels (1..15; the value 0 is reserved to mean that no unpacked gray value is available). This requires one additional 8-bit state variable to hold the gray levels. Then the gray level branch will become:
// new state value
static uint8_t gray_value;
...
// new variable within the next_pixel() function
uint8_t return_value;
...
// there is no old gray value available?
if (gray_value == 0)
    gray_value = *next_octet++;
// extract the low nibble
return_value = gray_value & 0x0f;
// shift the high nibble into low nibble
gray_value >>= 4;
return return_value;
This actually allows using 15 intermediate gray levels (a total of 17 levels), which maps very nicely onto a linear 0..255 scale.
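For clarity, here is the whole function with the 4-bit branch folded in (a sketch along the same lines as above, not tested; it assumes start_glyph() also clears gray_value so that a leftover nibble from the previous glyph is not reused):

/* 4-bit variant of next_pixel(); as in the fragment above, gray pixels are
   returned as 4-bit codes (1..15), while 0 and 255 are returned for
   background and full pixels as before */
uint8_t next_pixel(void)
{
    uint8_t return_value;

    // end sentinel only? if so, we are out of ternary data
    if (trits_to_decode == 1)
        trits_to_decode = dictionary[*next_octet++];
    if (trits_to_decode & 1)
    {
        trits_to_decode >>= 1;
        if (trits_to_decode & 1)
        {
            trits_to_decode >>= 1;
            // there is no old gray value available?
            if (gray_value == 0)
                gray_value = *next_octet++;
            return_value = gray_value & 0x0f;   // extract the low nibble
            gray_value >>= 4;                   // shift the high nibble down
            return return_value;
        }
        trits_to_decode >>= 1;
        return 255;     // full value
    }
    trits_to_decode >>= 1;
    return 0;           // background
}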
Three- or five-bit gray data is easiest to pack into a 16-bit halfword with the MSB always set to one. Then the same trick as with the ternary data can be used (shift until you get a 1).
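As a sketch of that unpacking for the 5-bit case (my own illustration; three 5-bit values per halfword, the sentinel 1 in the most significant used bit, little-endian octet order assumed; the 3-bit case with five values per halfword is analogous):

static uint16_t gray_bits;   /* packed 5-bit gray values; start_glyph() sets this to 1 */

static uint8_t next_gray_value(void)
{
    // only the end sentinel left? then load the next packed halfword
    if (gray_bits == 1)
    {
        gray_bits = (uint16_t)(next_octet[0] | (next_octet[1] << 8));
        next_octet += 2;
    }
    uint8_t value = gray_bits & 0x1f;   // lowest 5-bit gray value
    gray_bits >>= 5;                    // shift until only the sentinel 1 remains
    return value;
}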
It should be noted that the gains start to diminish at some point. The compression of the ternary data does not depend on the number of gray levels, whereas the gray-level data is stored uncompressed, so its size scales (almost) linearly with the number of bits per level. For a typical font the gray-level data at 8 bits is 1/2..2/3 of the total, but this is highly dependent on the typeface and size.
So, a reduction from 8 to 4 bits (which is visually almost imperceptible in most cases) typically reduces the compressed size by 1/4..1/3, whereas the further reduction from going down to three bits is significantly smaller. Two-bit data does not make sense with this compression algorithm.
How to build the dictionary?
While the decompression algorithm is very straightforward and fast, the real challenge is building the dictionary. It is easy to prove that there is such a thing as an optimal dictionary (the dictionary giving the least number of compressed octets for a given font), but wiser people than me seem to have proven that the problem of finding such a dictionary is NP-complete.
With my admittedly rather lacking theoretical knowledge of the field, I thought there would be great tools offering reasonably good approximations. There may be such tools, but I could not find any, so I rolled my own Mickey Mouse version. EDIT: the earlier algorithm was rather goofy; a simpler and more effective one was found:
1. Start with a static dictionary of '0', 'g', '1' (where 'g' signifies an intermediate value).
2. Split the ternary data of each glyph into a list of items, initially one item per trit.
3. Find the most common consecutive combination of items (it will most probably be '0', '0' on the first iteration).
4. Replace all occurrences of that combination with a new combined item and add it to the dictionary (e.g., the data '0', '1', '0', '0', 'g' becomes '0', '1', '00', 'g' if '0', '0' is replaced by '00').
5. Remove any items in the dictionary that are no longer used (they may occur at least in theory).
6. Repeat steps 3-5 until the dictionary is full (i.e. at least 253 rounds).
This is still a very simplistic approach and it probably gives a very sub-optimal result. Its only merit is that it works.
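For illustration, here is a rough sketch of that greedy builder (essentially a byte-pair-encoding style merge). It is my own reconstruction from the steps above, not the actual tool used to produce the numbers in the next section; the glyphs are assumed to be preprocessed into sequences of item indices, and step 5 as well as the cap that keeps a merged item's trit coding within a 32-bit word are omitted for brevity.

#include <stdint.h>
#include <string.h>

#define MAX_ITEMS 256

typedef struct {
    uint8_t *items;   /* current item indices of one glyph */
    int      len;
} glyph_seq;

static uint8_t item_left[MAX_ITEMS];    /* the two children of each merged item */
static uint8_t item_right[MAX_ITEMS];
static int     n_items = 3;             /* '0', 'g', '1' are predefined */

/* One round of steps 3-4; returns 0 when no pair is worth merging any more. */
static int merge_most_common_pair(glyph_seq *glyphs, int n_glyphs)
{
    static int counts[MAX_ITEMS][MAX_ITEMS];
    memset(counts, 0, sizeof counts);

    /* step 3: count every consecutive pair of items */
    for (int g = 0; g < n_glyphs; g++)
        for (int i = 0; i + 1 < glyphs[g].len; i++)
            counts[glyphs[g].items[i]][glyphs[g].items[i + 1]]++;

    int best_a = 0, best_b = 0, best_count = 0;
    for (int a = 0; a < n_items; a++)
        for (int b = 0; b < n_items; b++)
            if (counts[a][b] > best_count)
            {
                best_count = counts[a][b];
                best_a = a;
                best_b = b;
            }
    if (best_count < 2)
        return 0;

    /* step 4: create the new item and rewrite every glyph in place */
    int new_item = n_items++;
    item_left[new_item]  = (uint8_t)best_a;
    item_right[new_item] = (uint8_t)best_b;

    for (int g = 0; g < n_glyphs; g++)
    {
        int w = 0, r = 0;
        while (r < glyphs[g].len)
        {
            if (r + 1 < glyphs[g].len &&
                glyphs[g].items[r] == best_a && glyphs[g].items[r + 1] == best_b)
            {
                glyphs[g].items[w++] = (uint8_t)new_item;
                r += 2;
            }
            else
                glyphs[g].items[w++] = glyphs[g].items[r++];
        }
        glyphs[g].len = w;
    }
    return 1;
}

/* step 6: repeat until all 256 dictionary slots are used */
void build_dictionary(glyph_seq *glyphs, int n_glyphs)
{
    while (n_items < MAX_ITEMS && merge_most_common_pair(glyphs, n_glyphs))
        ;
}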
How well does it work?
One answer is: well enough. To elaborate a bit, here are some numbers for a font with 864 glyphs, a typical glyph size of 14x11 pixels, and 8 bits per pixel.
- raw uncompressed size: 127101 octets
- number of intermediate values: 46697
- Shannon entropies (octet-by-octet):
  - total: 528914 bits = 66115 octets
  - ternary data: 176405 bits = 22051 octets
  - intermediate values: 352509 bits = 44064 octets
- simply compressed ternary data (0=0, 10=1, 11=intermediate) (127101 trits): 207505 bits = 25939 octets
- dictionary-compressed ternary data: 18492 octets
  - entropy: 136778 bits = 17097 octets
  - dictionary size: 647 octets
- full compressed data: 647 + 18492 + 46697 = 65836 octets
- compression: 48.2 %
The comparison with octet-by-octet entropy is quite revealing. The intermediate-value data has high entropy, whereas the ternary data can be compressed. This can also be seen from the high number of 0 and 255 values in the raw data, as compared to the intermediate values.
We do not do anything to compress the intermediate values, as there do not seem to be any meaningful patterns. However, we beat the octet-by-octet entropy by a clear margin with the ternary data, and even the total amount of data is below the entropy limit. So, we could do worse.
Reducing the number of quantization levels to 17 would reduce the data size to approximately 42920 octets (compression over 66 %). The entropy is then 41717 octets, so the algorithm falls slightly behind the entropy limit, as is to be expected.
In practice, smaller font sizes are difficult to compress. This should be no surprise, as a larger fraction of the information is in the grayscale data. Very big font sizes compress efficiently with this algorithm, but there run-length compression is a much better candidate.
What would be better?
If I knew, I would use it! But I can still speculate.
Jubatian suggests there would be a lot of repetition in a font. This must be true with the diacritics, as aàäáâå have a lot in common in almost all fonts. However, it does not seem to be true with letters such as p and b in most fonts. While the basic shape is close, it is not enough. (Careful pixel-by-pixel typeface design is then another story.)
Unfortunately, this inevitable repetition is not very easy to exploit in smaller font sizes. I tried creating a dictionary of all possible scan lines and then only referencing those. Unfortunately, the number of distinct scan lines is high, so the overhead added by the references outweighs the benefits. The situation changes somewhat if the scan lines themselves can be compressed, but there the small number of octets per scan line makes efficient compression difficult. This problem is, of course, dependent on the font size.
My intuition tells me that this would still be the right way to go, if runs both longer and shorter than full scan lines were used. This, combined with 4-bit pixels, would probably give very good results, if only there were a way to create that optimal dictionary.
One hint in this direction is that an LZMA2-compressed file (xz at the highest compression setting) of the complete font data (127101 octets) is only 36720 octets. Of course, this format fulfils none of the other requirements (fast to decompress, can be decompressed glyph by glyph, low RAM requirements), but it still shows there is more redundancy in the data than my cheap algorithm has been able to exploit.
Dictionary coding is typically combined with Huffman or arithmetic coding after the dictionary step. We cannot do it here, but if we could, it would save another 4000 octets.