
If I use getenv() to disable some verification in my program (for instance, license checking), will a hacker be able to discover the relevant environment variable easily (using strace or other tools)?

Example of code:

if (! getenv("my_secret_env_variable")) checkLicense();

(If, on the other hand, I checked for the presence of a specific file, the hacker would see it immediately with strace.)

hmjd
user803422
  • Just one question - If you are developing the code (or your programmers) - why not just give yourself a license? – Ed Heal Jul 06 '12 at 14:52
  • The license example is just an example. The real thing is: I want to disable a watchdog. – user803422 Jul 06 '12 at 15:49
  • @user803422: ... then much to learn you have, young padawan :) – 0xC0000022L Jul 06 '12 at 15:52
  • There is SO MUCH difference between able to turn on debugging etc through an environment variable (I.e. not security risk if careful) or that anybody can make it easier to nick your salary. – Ed Heal Jul 06 '12 at 15:56

5 Answers

4

Let me add to the existing answers to give you a bit of a broader view about software protection.

Hackers won't just use strace; they'll use whatever tools they have in their tool chest, in order of increasing complexity - in most cases perhaps starting with something as simple as strings. Most hackers I know are inherently lazy and will therefore take the path of least resistance. (NB: by hacker I mean a technically very skilled person, not a cracker - the latter often has the same skill set, but a different set of ethics.)

Generally speaking from the perspective of the reverse engineer, just about anything can be "cracked" or worked around. The question is how much time and/or determination the attacker has. Consider that some student may do this just for giggles, while some "release groups" do this for fame within their "scene".

Let's consider hardware dongles for example. Most software authors/companies think that they somehow magically "buy" security when licensing some dongle. However, if they aren't careful with the implementation of the system it is as simple to work around as your attempt. Even when they are careful enough, it is often still possible to emulate a dongle although it will require some skill to extract the information on the dongle. Some dongles (I will not conceal that fact from you) are therefore "smart", meaning they contain a CPU or even a full-fledged embedded system. If vital parts of a software product are executed on the dongle and all that enters the dongle is the input and all that leaves the dongle is the output, that makes for a pretty good protection. However, it will equally annoy honest customers and attackers for the most part.

Or let's consider encryption as another example. Many developers don't seem to grasp the concept of a public and a secret key and think that "hiding" the secret key inside the code somehow makes it safer. Well, it doesn't. The code now contains both the algorithm and the secret key - how convenient is that for the attacker?

The general problem in most of these cases is that on one hand you trust the users (because you sell to them), but on the other hand you don't trust them (because you try to protect your software somehow). When you look at it this way you can see how futile it actually is. Most of the time you will disgruntle the honest customers, while only delaying the attacker a little (software protection is binary: either it gives protection or it doesn't, i.e. it's already cracked).

Consider instead the path the makers of IDA Pro took. They watermark all their binaries before the user gets them. Then, in case those binaries get leaked, legal measures can be taken. And even without taking legal measures into account they can shame (and have shamed) those that leaked their product publicly. Also, if you are responsible for a leak you won't be sold any upgrades to the software and the makers of IDA will not do business with your employer. That's quite an incentive to keep your copy of IDA safe. Now, I get it, IDA is somewhat of a niche product, but still the approach is fundamentally different and doesn't have the same issues as the conventional attempts at protecting software.

Another alternative is of course to offer a service rather than just software. You could give the user a token that the software sends to your server. The server then offers an update (or whatever the service is) after decoding the token (which we assume to be an encrypted message) and checking its validity. In this case the user only ever receives the token, never the secret key needed to decode it; your server, which holds the key, validates the token. Call it a product key or whatever - there are dozens of ways one can imagine. The point is that you don't end up in that contradiction of trusting and mistrusting the user at the same time. You only mistrust the user and can, for example, blacklist her token if it has been abused.
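A minimal sketch of such a server-side token check. This is a toy keyed checksum standing in for real cryptography (a production server would use an HMAC or a signed token), and every name, the token format, and the "secret" bytes are made up for illustration:

```c
/* Toy illustration of server-side token validation. NOT real cryptography:
 * a production server would use an HMAC or a signed token. All names and
 * the 4-byte "secret" below are made up for this sketch. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* known only to the server, never shipped inside the client software */
static const uint8_t SERVER_SECRET[] = { 0x13, 0x37, 0xC0, 0xDE };

/* toy keyed checksum over the token payload */
static uint8_t toy_tag(const char *payload)
{
    uint8_t tag = 0;
    for (size_t i = 0; payload[i] != '\0'; i++)
        tag ^= (uint8_t)payload[i] ^ SERVER_SECRET[i % sizeof SERVER_SECRET];
    return tag;
}

/* server side: issue a token of the form "customer:tag" */
static void issue_token(const char *customer, char *out, size_t outlen)
{
    snprintf(out, outlen, "%s:%02x", customer, toy_tag(customer));
}

/* server side: validate a token presented by the client software */
static bool validate_token(const char *token)
{
    char payload[64];
    unsigned tag;
    const char *colon = strrchr(token, ':');
    if (colon == NULL || (size_t)(colon - token) >= sizeof payload)
        return false;
    memcpy(payload, token, (size_t)(colon - token));
    payload[colon - token] = '\0';
    if (sscanf(colon + 1, "%2x", &tag) != 1)
        return false;
    return toy_tag(payload) == (uint8_t)tag;
}
```

The client only ever sees and forwards the opaque token string; since the secret never leaves the server, there is nothing inside the binary for strings or strace to reveal.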

0xC0000022L
  • 1
    That is a very good answer and critique. I have always thought that. The best you can do is to make it difficult and not worthwhile in cracking the code. To overcome this either offer a service (provide access to a database etc), or bung in a few bugs and then correct them in the next release. Just a cynical old f**t. – Ed Heal Jul 06 '12 at 15:22
  • @airza: I wrote "just about anything". I'm rather cautious because the "service" I describe cannot be cracked without compromising the server. The same goes for the "smart" dongle I described unless you have resources and skills that go well beyond what you usually are up against. In this case we're talking the kind of difference such as the average banking trojan vs. Stuxnet, Duqu or Flame :) – 0xC0000022L Jul 06 '12 at 16:05
2

Yes. Any hardcoded string is trivially easy to discover in a compiled binary, and the library call will be equally easy to see. It's also possible to patch the string in the binary to something else.

argentage
2

We can also LD_PRELOAD a replacement for the getenv function to display the parameters it receives.
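A minimal sketch of such an interposer; the victim binary name is hypothetical. The conventional variant forwards to libc's real getenv via dlsym(RTLD_NEXT, "getenv"); this sketch instead re-implements the lookup over environ so it stays self-contained:

```c
/* getenv_spy.c - sketch of an LD_PRELOAD interposer that logs every
 * environment-variable name the target program queries.
 * Build and run (./victim is a hypothetical target):
 *   gcc -shared -fPIC -o getenv_spy.so getenv_spy.c
 *   LD_PRELOAD=./getenv_spy.so ./victim
 */
#include <stdio.h>
#include <string.h>

extern char **environ;  /* the process environment, provided by the C runtime */

char *getenv(const char *name)
{
    fprintf(stderr, "getenv(\"%s\")\n", name);  /* reveal the queried name */

    /* re-implement the lookup over environ instead of forwarding to
     * libc's getenv (which would normally be found via dlsym(RTLD_NEXT)) */
    size_t len = strlen(name);
    for (char **e = environ; e != NULL && *e != NULL; e++)
        if (strncmp(*e, name, len) == 0 && (*e)[len] == '=')
            return *e + len + 1;
    return NULL;
}
```

Every getenv call in the target program then prints the queried variable name to stderr, so the "secret" name is exposed the first time the program checks it.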

Damien Locque
1

Yes - easy. They just run strings on the binary to find out what to try.

Ed Heal
1

"the hacker would see it immediately with strace" - maybe you should take a look at ltrace as well?

While you might not be able to hide the variable name, why not require a specific value? In particular, you could use an integer as the valid value (parsed with atoi), since integers are a lot harder to spot in code than strings - or even a combination of ints and single chars. However, remember that the environment block is an easy part of memory to find, especially in a core dump.
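A sketch of that value check; the variable name and the magic number are made-up examples (the original question only tests for the variable's existence):

```c
/* Sketch: require a specific numeric value rather than mere presence.
 * The variable name and the magic number 7351 are made-up examples. */
#include <stdlib.h>
#include <stdbool.h>

static bool watchdog_disabled(void)
{
    const char *v = getenv("MY_SECRET_ENV_VARIABLE");
    /* atoi() returns 0 for a missing or non-numeric value, so the
     * magic value must never be chosen as 0 */
    return v != NULL && atoi(v) == 7351;
}
```

Note this only raises the bar slightly: the integer constant still sits in the compiled code near the getenv call, it is just less obvious than a string literal.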

cdarke
  • The OP just uses `getenv` for its existence. Assuming that this is not true then either it is a string comparison or (as you say) conversion into an int. Former - easy, latter the next few lines of code will contain a switch or if statement. – Ed Heal Jul 06 '12 at 15:29
  • @Ed Heal: perhaps I should have said that integers are more difficult to spot in *compiled* code. – cdarke Jul 07 '12 at 15:09