
To start off, I want to point out that I know these things are never foolproof, and that with enough effort anything can be broken.

But: say I hand someone a piece of software that I have written and get them to run it. I want to verify the result they get. I was thinking of using some sort of encryption or hash to verify that they've actually run it and obtained a satisfactory result.

I also don't want the result to be "fakeable" (though again, I know that with enough effort it can be broken). This means that if I use a hash, I can't just have one hash for "yes" and one for "no", since the hash would only ever be one of two values and therefore easy to fake.

I want the user of the tool to hand something back to me (possibly in an email, for example), and I want it to be as small as possible - I don't want to be trawling through lines and lines of logs.

How would you go about implementing this? I may not have explained things all that well, but hopefully you get the gist of what I want to do.

If anyone has implemented this sort of thing before, any pointers would be much appreciated.

This question is more about "how to implement" rather than specifically asking about code, so if I've missed an important tag please feel free to edit!

AndrewC
  • What is to stop a hostile user from disassembling your program and editing it to produce whatever result they want, so that your encryption logic sends you the encrypted plaintext of their choosing? If your premise is that *the client is unreliable*, then do not write a security system that *relies upon data from the client*. – Eric Lippert Jul 01 '10 at 14:07
  • Also, note that software that "phones home" with encrypted information about the client may be illegal without the client's consent and knowledge of the message contents. I am not a lawyer; privacy laws vary greatly by country. You should probably consult an international privacy issues specialist before you go much further. – Eric Lippert Jul 01 '10 at 14:09
  • And finally, I would note that you have stated that you want to use encryption but haven't clearly stated *the threat*. Encryption is a technology for mitigating vulnerabilities to threats. *What precisely is the threat?* Is it eavesdropping, or hostile users, or what? If encryption doesn't actually mitigate the threat you have, don't use encryption; use some technique that *works*. – Eric Lippert Jul 01 '10 at 14:14
  • Hi Eric. I should have pointed out that the tool isn't going out publically, it will go to a select list of trusted users. The tool being disassembled isn't an issue. I'm not really bothered about encryption, all I need to do is be able to verify that they ran a specific process and got a legitimate result. The tool verifies stuff, so I don't want them to just assume that something works fine and not run the tool. – AndrewC Jul 01 '10 at 20:01
  • Regarding phoning home, it doesn't necessarily have to do that; I might not want to check the result EVERY time it's run (but rather every so often). The users may email it to me when I ask for it (it's a work environment). The data being sent won't be sensitive (obviously), so whilst I mentioned encryption perhaps I shouldn't have - I really was just writing down everything going through my head that might be remotely helpful, in order to gain as much critique and as many ideas as possible (like what you're giving now) :) – AndrewC Jul 01 '10 at 20:04

6 Answers


I think what you're looking for is non-repudiation. You're right, a hash won't suffice here - you'd have to look into some kind of encryption and digital signature on the "work done", probably PKI. This is a pretty wide field; I'd say you'll need both authentication and integrity verification (e.g. Piskvor did that, and he did it this way at that time).

To take a bird's eye view, the main flow would be something like this:

On user's computer:

  • run process
  • get result, add timestamp etc.
  • encrypt, using your public key
  • sign, using the user's private key (you may need some way to identify the user here - passphrases, smart cards, biometrics, ...)
  • send to your server

On your server:

  • verify signature using the user's public key
  • decrypt using your private key
  • process as needed

Of course, this gets you into the complicated and wonderful world that is Public Key Infrastructure; but done correctly, you'll have a rather good assurance that the events actually happened the way your logs show.
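
To make the flow concrete, here's a rough sketch in Python using the cryptography package (my choice of library, not part of the original answer). The keys are generated in-process purely for illustration; in reality the user's private key and your public key would be provisioned through your PKI:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-ins for keys that would really come from your PKI / smart cards.
your_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# --- on the user's computer ---
result = b"2010-07-01T14:00:00Z|host1|alice|PASS"            # result + timestamp etc.
ciphertext = your_key.public_key().encrypt(result, oaep)      # encrypt with your public key
signature = user_key.sign(ciphertext, pss, hashes.SHA256())   # sign with the user's private key
# ciphertext + signature are what gets sent to your server

# --- on your server ---
user_key.public_key().verify(signature, ciphertext, pss, hashes.SHA256())  # raises if tampered with
plaintext = your_key.decrypt(ciphertext, oaep)                # decrypt with your private key
print(plaintext)

For payloads bigger than a couple of hundred bytes you'd encrypt with a symmetric key and wrap that key with RSA instead of encrypting the result directly, but the flow stays the same.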

Piskvor left the building

I'm pasting in one of your comments here, because it goes to the heart of the matter:

Hi Eric. I should have pointed out that the tool isn't going out publically, it will go to a select list of trusted users. The tool being disassembled isn't an issue. I'm not really bothered about encryption, all I need to do is be able to verify that they ran a specific process and got a legitimate result. The tool verifies stuff, so I don't want them to just assume that something works fine and not run the tool.

So basically, the threat we're protecting against is lazy users, who will fail to run the process and simply say "Yes Andy, I ran it!". This isn't too hard to solve, because it means we don't need a cryptographically unbreakable system (which is lucky, because that isn't possible in this case, anyway!) - all we need is a system where breaking it is more effort for the user than just following the rules and running the process.

The easiest way to do this is to take a couple of items that aren't constant and are easy for you to verify, and hash them. For example, your response message could be:

  • System Date / Time
  • Hostname
  • Username
  • Test Result
  • HASH(System Date / Time | Hostname | Username | Test Result)

Again, this isn't cryptographically secure - anyone who knows the algorithm can fake the answer - but as long as doing so is more trouble than actually running the process, you should be fine. The inclusion of the system date/time protects against a naive replay attack (just sending the same answer as last time), which should be enough.
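
A rough sketch of that response message in Python, using only the standard library (the field separator and the use of SHA-256 are my choices, not requirements):

import getpass
import hashlib
import socket
from datetime import datetime, timezone

fields = [
    datetime.now(timezone.utc).isoformat(timespec="seconds"),  # system date/time
    socket.gethostname(),                                      # hostname
    getpass.getuser(),                                         # username
    "PASS",                                                    # test result
]
digest = hashlib.sha256("|".join(fields).encode()).hexdigest()

# The user mails you the fields plus the hash; you recompute the hash
# from the fields to check that nothing was altered after the run.
print("\n".join(fields + [digest]))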

caf

How about you take the output of your program (either "yes" or "no"), concatenate it with a random number, and then include the hash of that string?

So you end up with the user sending you something like:

YES-3456234
b23603f87c54800bef63746c34aa9195

This means there will be plenty of unique hashes, despite only two possible outputs.

Then you can verify that md5("YES-3456234") == "b23603f87c54800bef63746c34aa9195".

If the user is not technical enough to figure out how to generate an md5 hash, this should be enough.

A slightly better solution would be to concatenate another (hard-coded, "secret") salt when generating the hash, but leave this salt out of the output.

Now you have:

YES-3456234
01428dd9267d485e8f5440ab5d6b75bd

And you can verify that

md5("YES-3456234" + "secretsalt") == "01428dd9267d485e8f5440ab5d6b75bd"

This means that even if the user is clever enough to generate his own md5 hash, he can't fake the output without knowing the secret salt as well.

Of course, if he is clever enough, he can extract the salt from your program.
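
Putting both variants together, a quick sketch in Python (the salt value here is obviously just an example):

import hashlib
import random

def make_response(result, salt=""):
    # e.g. token = "YES-3456234"
    token = f"{result}-{random.randint(1000000, 9999999)}"
    digest = hashlib.md5((token + salt).encode()).hexdigest()
    return token, digest

# Unsalted: verifiable by anyone who knows the scheme, so also fakeable by them.
token, digest = make_response("YES")
assert hashlib.md5(token.encode()).hexdigest() == digest

# Salted: verifying (and faking) also requires the hard-coded secret salt.
token, digest = make_response("YES", salt="secretsalt")
assert hashlib.md5((token + "secretsalt").encode()).hexdigest() == digest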

If something more bullet-proof is needed, then you're looking at proper cryptographic signature generation, and I'll just refer you to Piskvor's answer, since I have nothing useful to add to that :)

Blorgbeard

In theory this is possible by using some sort of private salt and a hashing algorithm - basically a digital signature. The program has a private salt that it adds to the input before hashing. Private means the user does not have access to it; you, however, do know the salt.

The user sends you his result and the signature generated by the program. You can then confirm the result by checking whether hash(result + private_salt) == signature. If they don't match, the result is forged.
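
The check on your end would look something like this (a sketch; the salt value and hash choice are just placeholders):

import hashlib
import hmac

PRIVATE_SALT = "known-only-to-you-and-the-program"

def verify(result: str, signature: str) -> bool:
    expected = hashlib.sha256((result + PRIVATE_SALT).encode()).hexdigest()
    return hmac.compare_digest(expected, signature)  # False means the result was forged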

In practice this is almost impossible, because you cannot hide the salt from the user. It's basically the same problem that is discussed in this question: How do you hide secret keys in code?

igorw
  • Also, there will only be as many unique hashes generated as there are possible outputs - one for YES and one for NO, as in the OP's question. – Blorgbeard Jul 01 '10 at 12:28
  • True, you would have to add another server-provided random salt a la CRAM to fix that. – igorw Jul 01 '10 at 13:06

You could make the application a web app to which they have no source code access or access to the server on which it runs. Then your application can log its activity, and you can trust those logs.

Once a user has an executable program in their hands, then anything can be faked.

Jeffrey L Whitledge
  • Sorry, I probably should have said, but the amount of effort it would take to convert it to a web app would be prohibitive (the app has been written already, and what I'm looking for would be implemented as an update). Thanks for the answer though! – AndrewC Jul 01 '10 at 14:53

It's worth noting that you aren't really looking for encryption.

The "non-repudiation" answer is almost on the money, but all you really need to guarantee where your message has come from is to securely sign the message. It doesn't matter if someone can intercept and read your message, as long as they can't tamper with it without you knowing.

I've implemented something similar before: information was sent in plaintext - because it wasn't confidential - but an obfuscated signing mechanism meant that we could be (reasonably) confident that the message was generated by our client software.

Note that you can basically never guarantee security if the app is on someone else's hardware - but security is never about "certainty", it's about "confidence" - are you confident enough for your business needs that the message hasn't been tampered with?
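
As an illustration of signing-without-encrypting, an HMAC over the plaintext message is one common approach (the key and message format below are assumptions for the sketch, not what we actually shipped):

import hashlib
import hmac

SHARED_KEY = b"embedded-and-obfuscated-in-the-client"  # hiding this well is the hard part

def sign(message: bytes) -> str:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

message = b"2010-07-01|host1|alice|PASS"   # sent in the clear - it isn't confidential
tag = sign(message)                        # sent alongside the message

# On your side: recompute the tag and compare; any tampering changes it.
assert hmac.compare_digest(tag, sign(message))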

Dan Puzey
  • Yes, this is exactly what I want. I don't mind the information being read, just so long as it cannot be forged or changed without me knowing about it. – AndrewC Jul 01 '10 at 18:20
  • Then all you need is message signing. The difficulty is in obfuscating the client code to stop people being able to use it to sign their own message (which is never 100% doable, but you can make it complicated enough to be a reasonable deterrent). – Dan Puzey Jul 02 '10 at 09:02