
I need a reasonable supply of high-quality random data for an application I'm writing. Linux provides the /dev/random file for this purpose which is ideal; however, because my server is a single-service virtual machine, it has very limited sources of entropy, meaning /dev/random quickly becomes exhausted.

I've noticed that if I read from /dev/random, I will only get 16 or so random bytes before the device blocks while it waits for more entropy:

[duke@poopz ~]# hexdump /dev/random
0000000 f4d3 8e1e 447a e0e3 d937 a595 1df9 d6c5
<process blocks...>

If I terminate this process, go away for an hour and repeat the command, again only 16 or so bytes of random data are produced.

However, if instead I leave the command running for the same amount of time, much, much more random data are collected. I assume from this that over the course of a given time period, the system produces plenty of entropy, but Linux only utilises it if you are actually reading from /dev/random, and discards it if you are not. If this is the case, my question is:

Is it possible to configure Linux to buffer /dev/random so that reading from it yields much larger bursts of high-quality random data?

It wouldn't be difficult for me to buffer /dev/random as part of my program but I feel doing this at a system level would be more elegant. I also wonder if having Linux buffer its random data in memory would have security implications.
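For illustration, here is roughly what I mean by buffering it in my program (a minimal sketch in C; the pool size and polling interval are just placeholders): open /dev/random in non-blocking mode and drain whatever entropy the kernel has accumulated into my own buffer each time I poll.

/* Minimal sketch: keep a user-space pool topped up from /dev/random
 * using non-blocking reads, so entropy is harvested as soon as the
 * kernel makes it available rather than only when I happen to need it. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 4096          /* arbitrary */

static unsigned char pool[POOL_SIZE];
static size_t filled = 0;

/* Drain whatever /dev/random has right now into the pool. */
static void top_up_pool(int fd)
{
    while (filled < POOL_SIZE) {
        ssize_t n = read(fd, pool + filled, POOL_SIZE - filled);
        if (n > 0) {
            filled += (size_t)n;
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            break;              /* no entropy available at the moment */
        } else {
            perror("read /dev/random");
            break;
        }
    }
}

int main(void)
{
    int fd = open("/dev/random", O_RDONLY | O_NONBLOCK);
    if (fd < 0) {
        perror("open /dev/random");
        return 1;
    }
    for (;;) {
        top_up_pool(fd);
        fprintf(stderr, "buffered %zu random bytes\n", filled);
        sleep(60);              /* polling interval is arbitrary */
    }
}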

Idris
  • AFAIK, entropy is not (entirely) discarded: when the entropy is high enough, the kernel takes 1 in 4096 input samples and mixes them into its entropy pool. – Hasturkun Apr 12 '11 at 14:07

3 Answers


Sounds like you need an entropy daemon that feeds the entropy pool from other sources.

Keith
  • I've just installed that and it does seem to cause more rapid replenishment of /dev/random, which is good. However, /dev/random still exhibits the same behaviour: If I leave it for ten minutes and then read from it, only 16 or so bytes are produced. But if I leave it reading /dev/random constantly for the same period of time, many kilobytes of random data can be retrieved. Clearly, the system has a plentiful supply of randomness already; so wouldn't it be nice if Linux would save all that data up in a buffer for when I need it, rather than throwing it away? – Idris Apr 12 '11 at 13:15
  • Yes, I think that's how it works. I don't think they want to buffer a lot of data in the kernel. I suppose a daemon that reads it periodically and keeps a larger buffer would work (a rough sketch of that idea follows after these comments). – Keith Apr 12 '11 at 13:29
  • This is the kind of buffering daemon I need :D But does it exist..? – Idris Apr 12 '11 at 14:33
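To make the buffering-daemon idea from the comments above concrete, here is a rough sketch (C is assumed, and the spool path is made up): a background process that blocks on /dev/random and appends whatever entropy trickles out to a file the application can read later. Spooling random bytes to disk raises exactly the kind of security concern the question mentions, so at minimum the file needs restrictive permissions.

/* Sketch of a user-space buffering daemon: block on /dev/random and
 * append whatever comes out to a spool file for later use.
 * The spool path is hypothetical; 0600 permissions keep it private. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int in  = open("/dev/random", O_RDONLY);
    int out = open("/var/spool/random-buffer",
                   O_WRONLY | O_CREAT | O_APPEND, S_IRUSR | S_IWUSR);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }

    unsigned char buf[64];
    for (;;) {
        /* read() blocks until the kernel has entropy to hand out */
        ssize_t n = read(in, buf, sizeof buf);
        if (n <= 0)
            continue;
        if (write(out, buf, (size_t)n) != n)
            perror("write");
    }
}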

Use /dev/urandom.

A counterpart to /dev/random is /dev/urandom ("unlocked"/non-blocking random source[4]) which reuses the internal pool to produce more pseudo-random bits. This means that the call will not block, but the output may contain less entropy than the corresponding read from /dev/random. While it is still intended as a pseudorandom number generator suitable for most cryptographic purposes, it is not recommended for the generation of long-term cryptographic keys.
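As a quick illustration (a sketch in C, not part of the quoted text), a multi-kilobyte read from /dev/urandom returns immediately, whereas the same read from /dev/random would stall after a handful of bytes on an entropy-starved machine:

/* Sketch: /dev/urandom hands back as many bytes as you ask for
 * without blocking, unlike the blocking /dev/random read shown in
 * the question. */
#include <stdio.h>

int main(void)
{
    unsigned char buf[4096];
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f) {
        perror("fopen /dev/urandom");
        return 1;
    }
    size_t n = fread(buf, 1, sizeof buf, f);   /* returns immediately */
    printf("read %zu bytes without blocking\n", n);
    fclose(f);
    return 0;
}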

Sjoerd
  • Yes, I'd looked at /dev/urandom but decided against it, as I need high-quality randomness. – Idris Apr 12 '11 at 10:44

Have you got, or can you buy, a Linux-compatible hardware random number generator? That could be a solution to your underlying problem. See http://www.linuxcertified.com/hw_random.html

Robin Green
  • Yes, this would be ideal; I've seen a few USB entropy-generator dongles, and I've heard you can also use the noise from an audio input device. However, this particular system is just a cheap VM on the other side of the world. From my tests it does actually produce enough entropy for my application over time, but it does not collect it, so a buffer would seem like the cheapest solution. – Idris Apr 12 '11 at 10:59