152

I'm searching for an algorithm to compress small text strings: 50-1000 bytes (e.g. URLs). Which algorithm works best for this?

Victor Sergienko
Vasily Korolev
  • 2
    Where do you want to use these compressed strings? – Gumbo Jul 16 '09 at 15:17
  • 1
    Is this going towards `tinyurls` or something to do with storage space? – nik Jul 16 '09 at 15:21
  • No, tinyurls is not the answer here. – Vasily Korolev Jul 16 '09 at 15:23
  • Would you care to elaborate `basilio`? What do you want to get or, what is this compression targeted towards? – nik Jul 16 '09 at 15:33
  • 7
    I'm interested in an algorithm for compressing URLs; the best compression ratio is more important than running cost. Not interested in online services like tinyurls or tr.im. I'm looking for an algorithm, not a service. Don't think any other info could be useful... – Vasily Korolev Jul 16 '09 at 15:40
  • 3
    @Gumbo: "Text compression algorithms for short strings" is enough for finding algos, why are you so interested in knowing what they are for? I'm sure the OP will be able to find the one that does what he wants. – Dervin Thunk Jul 16 '09 at 16:51
  • I have a similar problem to the OP - I am storing application state information in the location hash and would like to compress it to shorten it. – Roy Tinker Oct 27 '10 at 18:41
  • Is this a duplicate of ["Compression of ASCII strings in C"](http://stackoverflow.com/questions/1097197/compression-of-ascii-strings-in-c)? – David Cary Jul 29 '10 at 19:40
  • 7
    @Vasily, a small hint: Whenever you're asking a question on SO in the form of, "What is *the best* XYZ?", your question is almost bound to receive votes for closing because asking for the best *might* lead to unnecessary product comparisons, or in the worst case, even flame wars. (It usually takes only a very small change to avoid that: If you asked the same question like, "Please suggest a XYZ.", you wouldn't get as many closing votes, even though it's basically the same question!) – stakx - no longer contributing Sep 16 '11 at 20:17
  • 1
    ~50% compression for URLs - see http://blog.alivate.com.au/packed-url/ – Kind Contributor Jun 07 '18 at 23:58

7 Answers

76

Check out Smaz:

Smaz is a simple compression library suitable for compressing very short strings.
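
For reference, here's a minimal calling sketch in Python via ctypes, assuming smaz.c has first been compiled into a shared library (the libsmaz.so name, the build command and the buffer sizes are assumptions made for this sketch; the two function signatures are the ones declared in smaz.h):

```python
# Minimal sketch: calling smaz from Python through ctypes.
# Assumes smaz.c has been built into a shared library, e.g.:
#   gcc -O2 -shared -fPIC -o libsmaz.so smaz.c
import ctypes

smaz = ctypes.CDLL("./libsmaz.so")        # library path is an assumption
# From smaz.h:
#   int smaz_compress(char *in, int inlen, char *out, int outlen);
#   int smaz_decompress(char *in, int inlen, char *out, int outlen);
smaz.smaz_compress.restype = ctypes.c_int
smaz.smaz_decompress.restype = ctypes.c_int

def compress(data: bytes) -> bytes:
    out = ctypes.create_string_buffer(len(data) * 2 + 16)   # generous output buffer
    n = smaz.smaz_compress(data, len(data), out, len(out))
    if n > len(out):                      # treat an oversized result as "buffer too small"
        raise ValueError("output buffer too small")
    return out.raw[:n]

def decompress(data: bytes) -> bytes:
    out = ctypes.create_string_buffer(4096)                 # assumed upper bound for the sketch
    n = smaz.smaz_decompress(data, len(data), out, len(out))
    if n > len(out):
        raise ValueError("output buffer too small")
    return out.raw[:n]

url = b"http://google.com/index.html"
packed = compress(url)
print(len(url), "->", len(packed), decompress(packed) == url)
```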

rink.attendant.6
stvchu
    See http://github.com/antirez/smaz/blob/master/smaz.c -- this is a variant of coding, not compression per se (at least not entirely). He uses a static word and letter dictionary. – Roy Tinker Oct 27 '10 at 18:46
  • 7
    Note: This is antirez's project. He's one of the principal authors of Redis and has a very strong reputation of releasing high quality, production code. – Homer6 Mar 05 '14 at 23:54
  • 7
    The smaz algorithm is optimized for English texts, therefore does not work well for random strings. Here are some samples (`string:orig_size:compr_size:space_savings`): `This is the very end of it.:27:13:52%`, `Lorem ipsum dolor sit amet:26:19:27%`, `Llanfairpwllgwyngyll:20:17:15%`, `aaaaaaaaaaaaa:13:13:0%`, `2BTWm6WcK9AqTU:14:20:-43%`, `XXX:3:5:-67%` – mykhal Mar 23 '14 at 11:41
  • 4
    Also take a look at shoco, a lower-compression but fast algorithm: http://ed-von-schleck.github.io/shoco – Dickey Singh Nov 07 '14 at 02:54
  • Add my library Unishox to the list https://github.com/siara-cc/unishox. It performs better than Smaz and Shoco and supports compressing UTF-8 strings. – Arundale Ramanathan Feb 15 '20 at 12:29
31

Huffman has a static cost, the Huffman table, so I disagree it's a good choice.

There are adaptive versions which do away with this, but the compression rate may suffer. Actually, the question you should ask is "which algorithm compresses text strings with these characteristics best?". For instance, if long repetitions are expected, simple Run-Length Encoding might be enough. If you can guarantee that only English words, spaces, punctuation and the occasional digit will be present, then Huffman with a pre-defined Huffman table might yield good results.

Generally, algorithms of the Lempel-Ziv family have very good compression and performance, and libraries for them abound. I'd go with that.

With the information that what's being compressed are URLs, I'd suggest that, before compressing (with whatever algorithm is easily available), you CODIFY them. URLs follow well-defined patterns, and some parts of them are highly predictable. By making use of this knowledge, you can codify the URLs into something smaller to begin with, and the ideas behind Huffman encoding can help you here.

For example, translating the URL into a bit stream, you could replace "http" with the bit 1, and anything else with the bit 0 followed by the actual protocol (or use a table to get other common protocols, like https, ftp, file). The "://" can be dropped altogether, as long as you can mark the end of the protocol. And so on. Go read about the URL format, and think about how URLs can be codified to take less space.
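
To make that concrete, here's a toy sketch of such a codification (my own illustrative scheme, byte-aligned rather than bit-packed for brevity; the scheme and suffix tables are made up for the example, and the general-purpose compressor at the end is interchangeable):

```python
# Toy URL codification: fold the predictable scheme and a common suffix into a
# single header byte, then hand the remaining text to a general-purpose compressor.
import zlib

SCHEMES = ["http://", "https://", "ftp://", "file://"]   # example table
SUFFIXES = ["", ".html", ".aspx", ".com/", ".com"]        # example table; index 0 = "no suffix"

def encode_url(url: str) -> bytes:
    s = next((i for i, p in enumerate(SCHEMES) if url.startswith(p)), len(SCHEMES))
    if s < len(SCHEMES):
        url = url[len(SCHEMES[s]):]
    x = next((i for i, p in enumerate(SUFFIXES[1:], 1) if url.endswith(p)), 0)
    if x:
        url = url[:-len(SUFFIXES[x])]
    header = (s << 4) | x                  # scheme in the high nibble, suffix in the low
    return bytes([header]) + zlib.compress(url.encode("utf-8"), 9)

def decode_url(blob: bytes) -> str:
    s, x = blob[0] >> 4, blob[0] & 0x0F
    body = zlib.decompress(blob[1:]).decode("utf-8")
    return (SCHEMES[s] if s < len(SCHEMES) else "") + body + SUFFIXES[x]

u = "http://www.example.com/path/to/page.html"
assert decode_url(encode_url(u)) == u
```

In practice you would swap the zlib call for one of the short-string compressors from the other answers, since zlib's own overhead can outweigh the savings on strings this short.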

Daniel C. Sobral
  • 4
    Not if the Huffman table is the same for all files, which would make sense if the files are all similar to each other. – finnw Jul 16 '09 at 15:45
  • 1
    If you have many, similar, small files, you are doing it all wrong. First, concatenate them all (like tar does), and then compress that. You'll get better compression, and the problem ceases to be "50-1000 bytes". – Daniel C. Sobral Jul 16 '09 at 15:51
  • 9
    @Daniel: depends whether you want random access to the compressed data. Compressing it all together prevents that with most compression systems. – Steve Jessop Jul 16 '09 at 16:19
24

I don't have code to hand, but I always liked the approach of building a 2D lookup table of size 256 * 256 chars (RFC 1978, PPP Predictor Compression Protocol). To compress a string you loop over each char and use the lookup table to get the 'predicted' next char, using the current and previous chars as indexes into the table. If there is a match you write a single 1 bit; otherwise you write a 0 bit, then the char itself, and update the lookup table with the current char. This approach basically maintains a dynamic (and crude) lookup table of the most probable next character in the data stream.

You can start with a zeroed lookup table, but obviously it works best on very short strings if it is initialised with the most likely character for each character pair, for example for the English language. As long as the initial lookup table is the same for compression and decompression, you don't need to emit it into the compressed data.

This algorithm doesn't give a brilliant compression ratio, but it is incredibly frugal with memory and CPU resources and can also work on a continuous stream of data - the decompressor maintains its own copy of the lookup table as it decompresses, thus the lookup table adjusts to the type of data being compressed.
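
A rough Python sketch of the scheme as described above (this is not the exact RFC 1978 wire format; the two-byte length prefix and the flag-byte packing are choices made just for this illustration):

```python
# Sketch of the predictor idea described above: a 256x256 table maps
# (previous char, current char) -> predicted next char. A set flag bit means
# "prediction was right"; a clear bit means "a literal byte follows".
# Flags are packed 8 per byte, each flag byte followed by its missed literals.

def _predict(table: bytearray, c1: int, c2: int) -> int:
    return table[c1 * 256 + c2]

def compress(data: bytes) -> bytes:
    table = bytearray(256 * 256)                     # zeroed; could be pre-trained
    out = bytearray(len(data).to_bytes(2, "big"))    # 2-byte length, assumes < 65536
    c1 = c2 = 0
    i = 0
    while i < len(data):
        flags, literals = 0, bytearray()
        for bit in range(8):
            if i >= len(data):
                break
            b = data[i]
            i += 1
            if _predict(table, c1, c2) == b:
                flags |= 1 << bit                    # hit: costs a single bit
            else:
                table[c1 * 256 + c2] = b             # miss: update table, emit literal
                literals.append(b)
            c1, c2 = c2, b
        out.append(flags)
        out += literals
    return bytes(out)

def decompress(blob: bytes) -> bytes:
    n = int.from_bytes(blob[:2], "big")
    table = bytearray(256 * 256)                     # must start identical to compressor's
    out = bytearray()
    c1 = c2 = 0
    pos = 2
    while len(out) < n:
        flags = blob[pos]
        pos += 1
        for bit in range(8):
            if len(out) >= n:
                break
            if flags & (1 << bit):
                b = _predict(table, c1, c2)
            else:
                b = blob[pos]
                pos += 1
                table[c1 * 256 + c2] = b
            out.append(b)
            c1, c2 = c2, b
    return bytes(out)

s = b"http://www.example.com/a http://www.example.com/b"
packed = compress(s)
print(len(s), "->", len(packed), decompress(packed) == s)
```

To use a pre-trained table as suggested above, both ends would simply start from the same pre-filled table instead of the zeroed one.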

redcalx
14

Any algorithm/library that supports a preset dictionary, e.g. zlib.

This way you can prime the compressor with the same kind of text that is likely to appear in the input. If the files are similar in some way (e.g. all URLs, all C programs, all StackOverflow posts, all ASCII-art drawings) then certain substrings will appear in most or all of the input files.

Every compression algorithm will save space if the same substring is repeated multiple times in one input file (e.g. "the" in English text or "int" in C code).

But in the case of URLs certain strings (e.g. "http://www.", ".com", ".html", ".aspx") will typically appear only once in each input file. So you need to share them between files somehow, rather than having one compressed occurrence per file. Placing them in a preset dictionary will achieve this.
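
A minimal sketch with Python's zlib module, assuming both ends ship the exact same dictionary bytes (the URL_DICT contents here are just an illustrative guess; a real dictionary should be assembled from the substrings that actually dominate your data):

```python
# Sketch: priming Deflate with a preset dictionary of substrings that are
# likely to occur in URLs. Compressor and decompressor must use identical bytes.
import zlib

# zlib's documentation recommends putting the most likely strings at the END
# of the dictionary, so order roughly from least to most common.
URL_DICT = b".aspx.org.net.html/index.php?id= https://www. http://www..com/"

def compress_url(url: str) -> bytes:
    c = zlib.compressobj(level=9, wbits=-15, zdict=URL_DICT)   # raw deflate, no header
    return c.compress(url.encode("utf-8")) + c.flush()

def decompress_url(blob: bytes) -> str:
    d = zlib.decompressobj(wbits=-15, zdict=URL_DICT)
    return (d.decompress(blob) + d.flush()).decode("utf-8")

u = "http://www.example.com/index.html"
packed = compress_url(u)
print(len(u.encode("utf-8")), "->", len(packed))
assert decompress_url(packed) == u
```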

finnw
6

Huffman coding generally works okay for this.
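
To make the "pre-defined table" idea from the earlier answer concrete, here's a minimal sketch in which encoder and decoder build the same Huffman code from a shared, fixed frequency table, so no table has to travel with each string (the frequencies below are invented for the example; a real table should be derived from a corpus of the strings you expect):

```python
# Minimal static-Huffman sketch: the code is derived from a frequency table
# that both encoder and decoder share, so nothing per-string is transmitted
# except the bits themselves (plus the bit count, to resolve padding).
import heapq
from itertools import count

# Made-up frequencies for the example; derive real ones from your own data.
FREQ = {" ": 13, "e": 12, "t": 9, "o": 8, "/": 7, ".": 6, "a": 6, "h": 5,
        "n": 5, "s": 5, "r": 4, "w": 3, "c": 3, "m": 3, "p": 2, ":": 2,
        "l": 2, "i": 2, "x": 1, "g": 1}

def build_code(freq):
    tiebreak = count()                       # keeps heap comparisons well-defined
    heap = [(f, next(tiebreak), {ch: ""}) for ch, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {c: "0" + code for c, code in a.items()}
        merged.update({c: "1" + code for c, code in b.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

CODE = build_code(FREQ)
DECODE = {v: k for k, v in CODE.items()}

def encode(text):
    bits = "".join(CODE[ch] for ch in text)          # raises KeyError on unknown chars
    padded = bits + "0" * (-len(bits) % 8)
    return len(bits), bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))

def decode(nbits, data):
    bits = "".join(f"{b:08b}" for b in data)[:nbits]
    out, cur = [], ""
    for bit in bits:
        cur += bit
        if cur in DECODE:                            # prefix-free, so first match is right
            out.append(DECODE[cur])
            cur = ""
    return "".join(out)

text = "http://www.example.com/ "
n, blob = encode(text)
assert decode(n, blob) == text
print(len(text), "chars ->", len(blob), "bytes (table shared out of band)")
```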

Zifre
3

If you are talking about actually compressing the text and not just shortening it, then Deflate/gzip (gzip is a wrapper around Deflate) and zip work well for smaller files and text. Other algorithms, such as bzip2, are more efficient on larger files.

Wikipedia has a list of compression times (look for the comparison of efficiency):

Name       | Text         | Binaries      | Raw images
-----------+--------------+---------------+-------------
7-zip      | 19% in 18.8s | 27% in  59.6s | 50% in 36.4s
bzip2      | 20% in  4.7s | 37% in  32.8s | 51% in 20.0s
rar (2.01) | 23% in 30.0s | 36% in 275.4s | 58% in 52.7s
advzip     | 24% in 21.1s | 37% in  70.6s | 57% in 41.6s
gzip       | 25% in  4.2s | 39% in  23.1s | 60% in  5.4s
zip        | 25% in  4.3s | 39% in  23.3s | 60% in  5.7s
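
For strings in the 50-1000 byte range, the container overhead matters as much as the algorithm itself, which is easy to see with Python's zlib and gzip modules (the URL below is just an example):

```python
# Quick look at wrapper overhead on one short string: the gzip container adds
# roughly 18 bytes of header/trailer, the zlib container about 6, raw deflate none.
import gzip
import zlib

url = b"http://www.example.com/some/longish/path/index.html"

raw = zlib.compressobj(9, zlib.DEFLATED, -15)   # negative wbits = raw deflate stream
deflated = raw.compress(url) + raw.flush()

print("original    :", len(url))
print("raw deflate :", len(deflated))
print("zlib wrapper:", len(zlib.compress(url, 9)))
print("gzip wrapper:", len(gzip.compress(url, 9)))
```
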
Timo
Ryan Christensen
  • 9
    He wants to compress text and not files. – Gumbo Jul 16 '09 at 15:29
  • 4
    You can compress text and binaries with these algorithms. In fact we use deflate within a cms system that runs in python. – Ryan Christensen Jul 16 '09 at 16:08
  • An example in C# using gzip for strings is here: http://www.csharphelp.com/archives4/archive689.html – Ryan Christensen Jul 16 '09 at 16:10
  • zlib module in python for compressing strings: http://www.python.org/doc/2.5.2/lib/module-zlib.html – Ryan Christensen Jul 16 '09 at 16:11
  • 4
    gzip (and zlib) uses deflate and adds wrapper/framing overhead. Direct deflate/LZ77 (dictionary overhead and efficiency still depend on the implementation and settings) can reduce the break-even point. This is for "short" strings in the dozens to hundreds of characters, of course (you should still have a bit to indicate "was this compressed?" to avoid enlarging data); the extra overhead matters less as the text grows. The numbers posted here appear to be for *large* text files (many seconds to run!), while the OP asks for 50-1000 characters - *very small* in comparison. – user2864740 Nov 03 '18 at 02:29
3

You might want to take a look at Standard Compression Scheme for Unicode.

SQL Server 2008 R2 uses it internally and can achieve up to 50% compression.

Le Hibou
  • SCSU 'compresses' non-English Unicode in UTF-16/MB encodings. For English-based Unicode / plain old ASCII, UTF-8 also 'compresses' to 50% of UTF-16. – user2864740 Nov 03 '18 at 02:23