
In honor of the Hutter Prize, what are the top algorithms (and a quick description of each) for text compression?

Note: The intent of this question is to get a description of compression algorithms, not of compression programs.

Brian R. Bondy
  • I once saw a (mock) article proposing lossy compression of text, with excellent performance (in size!)... It was funny. – PhiLho Oct 25 '08 at 14:18
  • @PhiLho heh, that's essentially what Summly did :) http://www.theregister.co.uk/2013/03/25/yahoo_buys_summly/ – John Carter May 04 '13 at 21:38

3 Answers


The boundary-pushing compressors combine algorithms for insane results. Common algorithms include:

  • The Burrows-Wheeler Transform (BWT) - reorders characters (or other blocks of bits) with a reversible algorithm so that repeated blocks end up next to each other, which makes the source easier to compress. Decompression runs as normal and the result is un-shuffled with the inverse transform. Note: BWT alone doesn't actually compress anything; it just makes the source easier to compress. (A small sketch of the forward and inverse transform follows this list.)
  • Prediction by Partial Matching (PPM) - an evolution of arithmetic coding where the prediction model (context) is built by gathering statistics about the source rather than using static probabilities. Even though its roots are in arithmetic coding, the result can be represented with Huffman encoding or a dictionary as well as with arithmetic coding.
  • Context Mixing - arithmetic coding uses a static context for prediction, PPM dynamically chooses a single context, and Context Mixing uses many contexts and weights their results. PAQ uses context mixing.
  • Dynamic Markov Compression - related to PPM, but uses bit-level contexts rather than bytes or longer blocks.
  • In addition, Hutter Prize contestants may replace common words with short codes from external dictionaries and mark upper-case text with a special symbol rather than storing two distinct entries. That's why these entries are so good at compressing text (especially ASCII text) and not as valuable for general compression.
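
To make the BWT bullet concrete, here is a minimal Python sketch of the forward and inverse transform. It is the naive O(n² log n) textbook version and assumes a "\0" sentinel that never occurs in the input; real compressors use suffix-array constructions and follow the transform with move-to-front coding and an entropy coder.

```python
def bwt(s: str, sentinel: str = "\0") -> str:
    """Forward Burrows-Wheeler Transform (naive version).

    Appends a sentinel, sorts all rotations of the string, and returns
    the last column. The sentinel must not occur in the input.
    """
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)


def ibwt(last_column: str, sentinel: str = "\0") -> str:
    """Inverse transform: repeatedly prepend the last column and sort to
    rebuild the rotation table, then pick the row ending in the sentinel."""
    table = [""] * len(last_column)
    for _ in range(len(last_column)):
        table = sorted(c + row for c, row in zip(last_column, table))
    original = next(row for row in table if row.endswith(sentinel))
    return original.rstrip(sentinel)


if __name__ == "__main__":
    text = "banana_bandana"
    transformed = bwt(text)
    print(transformed)            # similar characters cluster into runs
    assert ibwt(transformed) == text
```

That pairing (BWT, then move-to-front and run-length coding, then Huffman coding) is roughly the bzip2 pipeline.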

Maximum Compression is a pretty cool text and general compression benchmark site. Matt Mahoney publishes another benchmark. Mahoney's may be of particular interest because it lists the primary algorithm used per entry.

Corbin March
  • Are there algorithms that compress text and give me back text (not binary)? – CMCDragonkai Apr 09 '15 at 07:50
  • Pied Piper compression was not discovered when this list was created! – flakerimi May 13 '20 at 21:21
  • I wonder how using a GPT-3-style AI for next-token prediction might improve on the best results so far. Can it beat the existing best results? Let's assume we don't care about speed and want maximum compression, as for long-term text archival purposes. – sktguha Aug 28 '20 at 14:51

There's always lzip.

All kidding aside:

  • Where compatibility is a concern, PKZIP (DEFLATE algorithm) still wins.
  • bzip2 is the best compromise between enjoying a relatively broad install base and a rather good compression ratio, but it requires a separate archiver.
  • 7-Zip (LZMA algorithm) compresses very well and is available under the LGPL. Few operating systems ship with built-in support, however.
  • rzip is a variant of bzip2 that, in my opinion, deserves more attention. It could be particularly interesting for huge log files that need long-term archiving. It also requires a separate archiver.
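
If you want a quick, unscientific feel for how these families compare on your own data, Python's standard library happens to ship bindings for DEFLATE (zlib), bzip2 (bz2), and LZMA (lzma). A small sketch; the input file name is just a placeholder:

```python
import bz2
import lzma
import zlib

# Placeholder input file; substitute any large, text-heavy file you have.
with open("enwik_sample.txt", "rb") as f:
    data = f.read()

codecs = {
    "DEFLATE (zlib, level 9)":  lambda d: zlib.compress(d, 9),
    "bzip2 (level 9)":          lambda d: bz2.compress(d, 9),
    "LZMA/xz (default preset)": lambda d: lzma.compress(d),
}

for name, compress in codecs.items():
    compressed = compress(data)
    ratio = len(compressed) / len(data)
    print(f"{name:28s} {len(compressed):>12,d} bytes  ({ratio:.1%} of original)")
```

As the comments above suggest, a context-mixing compressor such as zpaq will usually beat all three on plain text, at the cost of far more CPU time and memory.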
Sören Kuklau
  • These come nowhere near PAQ and several other text-only compression algorithms (http://en.wikipedia.org/wiki/PAQ) – Brian R. Bondy Oct 25 '08 at 14:47
  • @BrianR.Bondy: you're right, `zpaq` compressed an order of magnitude smaller than PKZIP. See below (yes, it's a tool, but some people come here looking for exactly that) – serv-inc Jan 21 '18 at 10:54

If you want to use PAQ as a program, you can install the zpaq package on Debian-based systems. Usage is as follows (see also `man zpaq`):

zpaq c archivename.zpaq file1 file2 file3

Compression was to about 1/10th of a zip file's size (1.9M vs 15M).

serv-inc