
Suppose I have a program that decrypts a file and stores the decrypted contents on the heap. I want to protect this information from other (non-root) processes running on the same system, so before I call free() to release the heap allocation, I'm using memset() to overwrite the data and make it unavailable to the next process that uses the same physical memory. (I understand this isn't a concern on some systems, but would prefer to err on the side of safety.)

However, I'm not sure what to do in cases where the program doesn't terminate normally, either through a forced termination (SIGINT, SIGTERM, etc.) or due to an error condition (SIGSEGV, SIGBUS, etc.). Should I just trap as many signals as possible to clear the heap before exiting, or is there a more orderly way of doing things?

r3mainer

1 Answer


An operating system that leaked the contents of memory between processes (especially processes with different privileges) would be so broken from a security point of view that wiping the memory yourself wouldn't change anything. Especially since on most operating systems the pages you write to can be taken away from you at any point, swapped out, and handed to someone else. So I can safely say that you don't need to worry about normal termination unless you're on an operating system so specialized that there's no one to leak the memory to. Also, some signals (SIGKILL, for one) can't be caught at all, so you couldn't handle every case anyway.

When it comes to abnormal termination (SIGSEGV, etc.), your best bet is to either disable core dumps or at least make sure that your core dumps are only readable by you. That should be the main worry: the physical memory won't leak, but your core dumps could be readable by someone else.

That being said, it's still very good practice to wipe secrets from memory as soon as you don't need them anymore. Not because they can leak to others through normal operation, because they can't, but because they can leak through bugs. You might have an exploitable bug, a stray pointer might end up written to a log, you might leave your key on the stack and then forget to initialize your data, etc. So your main worry shouldn't be wiping secrets from memory before exit, but actually identifying the point in your code where you no longer need a secret and wiping it right then and there.

Unfortunately, the memset you mentioned is not enough on its own. Many compilers today are smart enough to see that some of your calls to memset are dead stores and optimize them away (like a memset of a stack buffer just before leaving a function, or just before free). See this issue in LibreSSL for a discussion about it, and this implementation of explicit_bzero for the currently best known attempt to work around it on clang and gcc.

Art
  • Thanks! So it's quite safe to assume that the operating system will *always* clear the contents of memory before allocating it to a different process? (This seems to be the way OS X, Debian and Ubuntu behave, but I couldn't find any definitive references.) – r3mainer Jul 15 '15 at 13:07
  • I've had a discussion about it a few times and the only conclusion is "this is so obvious (to OS developers) that it isn't even documented anywhere". POSIX does not have any way of getting anonymous memory from the operating system other than `malloc`&co with the usual semantics, so it is of no help. But to provide safe separation of processes memory has to be wiped out before being given to someone else. There is no other way to do it. Could a standards lawyer make an OS that doesn't? Sure, but I doubt it would be used. I'm not aware of anyone doing anything different today. – Art Jul 15 '15 at 14:13
  • Well, perhaps this is all in the grey area between the POSIX and ANSI standards, but I wish they could make a bit more effort to explain themselves on matters like this. Optimizing away a call to `memset()` before a call to `free()` is just plain stupid. Doing it silently and without warning is *criminally* stupid. – r3mainer Jul 15 '15 at 15:22