I am not aware of a canned solution for your problem, so you may need to write it yourself.
It certainly can be done. I would use the tar format, since it is simple and well understood, but it would require an auxiliary file with index information into the compressed archive. The idea is to control the compression of the tar file so that it contains entry points that require no history, i.e. places where decompression can start from scratch. Those entry points would need to be spaced much farther apart than 1K to preserve good compression, but close enough together to provide relatively fast random access to the 1K files.
The simplest approach would be to use gzip to separately compress chunks of the tar file, where each chunk is a set of complete files totaling around 128K bytes. The gzip streams can simply be concatenated, and the resulting .tar.gz file will work normally with tar utilities, since it is a property of the gzip format that the concatenation of valid gzip streams is itself a valid gzip stream.
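Here is a minimal sketch of that chunked compression in Python, assuming a plain .tar already exists on disk. The function name `chunked_compress`, the `CHUNK_TARGET` constant, and the returned list of stream offsets are illustrative choices, not part of any existing tool; the cut points rely on the `tarfile` module's `TarInfo.offset` attribute (a CPython implementation detail) for the header position of each entry.

```python
import gzip
import os
import tarfile

CHUNK_TARGET = 128 * 1024   # aim for roughly 128K of uncompressed tar data per gzip stream

def chunked_compress(plain_tar, out_path):
    """Cut plain_tar at entry boundaries into ~128K chunks, gzip each chunk
    separately, and concatenate the results into out_path.  Returns the
    (uncompressed offset, compressed offset) of each gzip stream start."""
    # Header offset of every entry; cutting only at these offsets keeps files whole.
    with tarfile.open(plain_tar, "r:") as tf:
        entry_offsets = [member.offset for member in tf]
    end = os.path.getsize(plain_tar)   # includes the end-of-archive padding blocks

    # Choose cut points: start a new chunk once the current one reaches ~128K.
    cuts = [0]
    for off in entry_offsets[1:]:
        if off - cuts[-1] >= CHUNK_TARGET:
            cuts.append(off)
    cuts.append(end)

    streams = []   # (uncompressed offset, compressed offset) of each gzip stream
    with open(plain_tar, "rb") as src, open(out_path, "wb") as dst:
        for start, stop in zip(cuts, cuts[1:]):
            streams.append((start, dst.tell()))
            src.seek(start)
            dst.write(gzip.compress(src.read(stop - start)))   # one complete gzip member
    return streams
```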
The auxiliary file would contain a list of the files in the tar archive with their sizes and offsets in the uncompressed tar, and, separately, the compressed and uncompressed offsets of each gzip stream's starting point. To extract a file, you would look up its offset in the uncompressed tar, find the gzip stream whose uncompressed offset is the largest one less than or equal to that file's offset, and then decompress from the corresponding compressed offset until you reach the file.
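A sketch of the index and the per-file lookup, again as an illustration rather than a finished tool: `build_file_index` records each member's data offset and size from the plain tar (using the CPython `TarInfo.offset_data` attribute), and the hypothetical `extract_one` combines that with the stream offset list returned by `chunked_compress` above.

```python
import bisect
import gzip
import tarfile

def build_file_index(plain_tar):
    """Map each member name to (data offset in the uncompressed tar, size)."""
    with tarfile.open(plain_tar, "r:") as tf:
        return {member.name: (member.offset_data, member.size) for member in tf}

def extract_one(tar_gz_path, files, streams, name):
    """Read one member out of the chunked .tar.gz without decompressing the
    whole archive.  `files` comes from build_file_index and `streams` from
    chunked_compress above."""
    data_off, size = files[name]
    # Gzip stream with the largest uncompressed offset <= the member's data offset.
    i = bisect.bisect_right([u for u, _ in streams], data_off) - 1
    uncomp_off, comp_off = streams[i]
    with open(tar_gz_path, "rb") as f:
        f.seek(comp_off)                      # jump straight to that gzip stream
        with gzip.GzipFile(fileobj=f) as gz:
            gz.read(data_off - uncomp_off)    # discard everything before the member's data
            return gz.read(size)
```

In practice you would serialize the file table and the stream offsets (e.g. as JSON) into the auxiliary file that sits next to the archive.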
For this example, on average you would only need to decompress about 64K, half of one 128K chunk, to get to any given file in the archive.