
I have a (planned) commercial program that is writing out a usage log. As part of their license, users will be required to submit the log back to the company every few weeks. How can I ensure the file has not been tampered with?

The system is being written in C#, with WinForms or WPF.

Update: Hmmm... quite surprised that "How to make a file tamper proof?" is considered an "exact duplicate" of "What technique can protect a secret from a fully trusted user?" Anyway, the tribe has spoken.

Craig Schwarze
  • ahem http://stackoverflow.com/questions/2150912/what-technique-can-protect-a-secret-from-a-fully-trusted-user/ very similar question from yourself. – Earlz Jan 27 '10 at 23:44
  • You already asked this question here: http://stackoverflow.com/questions/2150912/what-technique-can-protect-a-secret-from-a-fully-trusted-user . – Daniel Pryden Jan 27 '10 at 23:45
  • Daniel, that was a broader question than this one. And this is not about making the contents secret - it's simply about protecting them from tampering. – Craig Schwarze Jan 27 '10 at 23:49
  • What's so important in the log file that tampering is of concern? If it truly is a usage log, I would rethink storing that on the user system at all. Build a web service that your application calls when the program starts/stops to report its usage. Real simple; harder to tamper with; more immediate than "every few weeks." – Mike Chess Jan 28 '10 at 00:02
  • Why not make the jump to WPF? – ChaosPandion Jan 28 '10 at 00:05
  • What are you going to do about it if the log has been tampered with? What if they claim they don't know what you're talking about? Can you prove it (No)? Just wondering because if you're considering suing your customers or some sh!@ like that, you might want to reconsider what your problem is. – noctonura Jan 28 '10 at 00:08
  • You're going to *require* users who've *paid* you for your program to let you know how they are using it? I understand your desire to know what's going on, but... This'd get a "no-buy" at my company. – DaveE Jan 28 '10 at 00:11
  • Well, sometimes people charge per-client (as in, your client's clients), so I believe his concern is legitimate, just not exactly possible while being 100% secure. – Earlz Jan 28 '10 at 00:15
  • @CraigS: Would you, honestly, buy something like that yourself? It's a ridiculous restriction; you'd have to pay most people to use it. – jalf Jan 28 '10 at 00:15
  • You might claim that this is a different question, but Eric Lippert's answer is still correct. – Anon. Jan 28 '10 at 00:21
  • Asking the same question three different ways doesn't change the answer. Stop trying to make the audit log tamperproof, because that's impossible. Rather, confirm that the audit logs are correct by comparing them with known transactions on the broker's server. This requires the cooperation of a non-hostile broker. Then *prosecute for fraud* anyone who submits a fraudulent audit log. – Eric Lippert Jan 28 '10 at 00:30
  • Well, the idea is to charge for usage rather than an upfront license. It might not be workable. – Craig Schwarze Jan 28 '10 at 00:31
  • Now, you *do* have to make the contents of the log encrypted, not so that the user cannot tamper with it, but so that an attacker *attacking the user* cannot get access to the user's private financial data. That is a whole other section of your threat model that you're going to have to think hard about. – Eric Lippert Jan 28 '10 at 00:31
  • True - are you available for contract, Eric? ;-) – Craig Schwarze Jan 28 '10 at 00:36
  • If it is to charge for usage then your best bet would be designing your application to communicate directly with some web service you control. If it can't establish that communication then no usage is allowed. But, as a user, I'm not likely to pay for software that charges me by the minute (or hour or byte). – Mike Chess Jan 28 '10 at 01:01
  • I am not an expert on securing financial data; I'm an expert on programming languages. But since I did so much work on the analysis of security flaws in JScript, I've had to pick up rather a lot of theory and practice on design of secure software. Now, *secure* software is hard -- that is, software that does some task and is hard for attackers to misuse. *Security* software is not *secure* software; writing systems specifically designed to protect financial assets against determined criminals is best left to professional experts. – Eric Lippert Jan 28 '10 at 06:06

8 Answers


Isn't this just another case of the DRM problem, i.e., you're giving users a key and a lock and trying to make sure they don't use it in a way you don't want?

Even if you could guarantee the log file hasn't been touched, how are you going to guarantee the binary that generated it hasn't been touched? There's no end to this. If your code is running on a computer entirely under their control, you're SOL.

(The good news is that you probably don't care. Just put it in the contract. Small users won't want to put up with submitting a log file. Big businesses won't want to run afoul of a contract because there's a lot of money at stake.)

Ken

You can have the application produce a digital signature for the file, but this means the application must contain the private key somehow, and thus the user could potentially hack the application such that they can circumvent this system.

The fact is the user has full control over their system, and you can't prevent them from doing particular things on it. I would suggest you invest more effort in adding useful features that increase users' appreciation of your software, so that they are less likely to pirate it.
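To make that concrete, here is a minimal sketch (Python stdlib for brevity; the key, log contents, and function names are all made up, and an HMAC stands in for a full digital signature). It shows both sides of the problem: verification catches naive edits, but once the embedded key is extracted from the binary, a tampered log can simply be re-signed.

```python
import hashlib
import hmac

# Key baked into the application binary -- the weak point: anyone
# who extracts it can forge signatures that verify perfectly.
EMBEDDED_KEY = b"app-secret-key"  # hypothetical

def sign(log: bytes, key: bytes = EMBEDDED_KEY) -> str:
    return hmac.new(key, log, hashlib.sha256).hexdigest()

def verify(log: bytes, signature: str, key: bytes = EMBEDDED_KEY) -> bool:
    return hmac.compare_digest(sign(log, key), signature)

log = b"2010-01-27 23:44 feature X used\n"
sig = sign(log)
assert verify(log, sig)                # untouched log passes
assert not verify(log + b"edit", sig)  # naive tampering is caught

# But a user who pulled EMBEDDED_KEY out of the binary just re-signs:
tampered = b"2010-01-27 23:44 nothing happened\n"
assert verify(tampered, sign(tampered))  # forgery is indistinguishable
```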

AaronLS

The only way I still see is making the application contact a remote server. Encrypting the file is pointless, because the attacker has the very program that creates the encrypted file.

So why not have the application contact a remote server every time it needs to log something (and bail out if the server can't be reached)? The program can also use some basic key verification.

Say the program tells the server "I'm doing stuff" and the server sends back an "Ok" carrying a PGP signature for verification. The private key would be kept secure on the server, and the public key on the client would ensure the server isn't being spoofed on the client's network.

I assume you are trying to log usage of your program or something like that.
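A rough sketch of that handshake (Python for brevity, with an HMAC and a shared secret standing in for the PGP keypair; the token format and the 30-second freshness window are arbitrary choices): the server signs "OK" together with a timestamp, and the client rejects stale tokens so a captured reply can't just be replayed.

```python
import hashlib
import hmac
import time

SERVER_KEY = b"server-only-secret"  # stand-in for the server's private key
MAX_AGE = 30  # seconds a token stays valid -- an arbitrary window

def server_ack(now=None):
    """Server side: sign "OK" together with the current timestamp."""
    ts = str(int(now if now is not None else time.time()))
    mac = hmac.new(SERVER_KEY, f"OK|{ts}".encode(), hashlib.sha256).hexdigest()
    return f"OK|{ts}|{mac}"

def client_verify(token, now=None):
    """Client side: check the signature, then reject stale (replayed) tokens."""
    word, ts, mac = token.split("|")
    expected = hmac.new(SERVER_KEY, f"{word}|{ts}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac):
        return False
    age = (now if now is not None else time.time()) - int(ts)
    return 0 <= age <= MAX_AGE

token = server_ack(now=1000.0)
assert client_verify(token, now=1010.0)      # fresh token: accepted
assert not client_verify(token, now=2000.0)  # replayed later: rejected
assert not client_verify(token.replace("OK", "KO"), now=1010.0)  # forged: rejected
```

In the real asymmetric version the client would hold only the public key, so nothing extracted from the client lets it forge the server's answers (though, as the comments below note, a patched client can skip the check entirely).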

Earlz
  • This would probably be more difficult to crack than the other suggestions, but it can still be cracked by disabling the code that performs the verification against the server. – AaronLS Jan 28 '10 at 00:08
  • Of course, even more so with it being compiled to IL, but still... maybe if it actually relied on the result from the server to direct its next action... Here, though, we're getting into the realm of impossible-to-prevent attacks. – Earlz Jan 28 '10 at 00:11
  • Yeh, LOL even the big boys like Adobe have yet to make an uncrackable licensing system. – AaronLS Jan 28 '10 at 00:19
  • PDF "licensing" does not count as an attempt at encryption. They set one single bit in the file telling PDF readers "Whatever you do, don't show this plain text to the user. Prompt them for the password first." – Earlz Jan 28 '10 at 00:26
  • Yea, but not if you combine public and private keys where the private key is on the server and verified by the client with the public key (along with a timestamp on the "OK" so that you can't just capture the message and resend it a lot). – Earlz Jan 28 '10 at 00:29
  • Then I open the program in a hex editor and overwrite the offending public key with one I know the private key for, and use that to sign the "Ok"s. If it needs to talk to the server I can resign the data with the old key and forward it. – David X Jan 28 '10 at 00:33
  • hmm... Well played. lol. – Earlz Jan 28 '10 at 00:38
  • @earlz, thanks, I can keep this up all day. – David X Jan 28 '10 at 00:40
  • @Earlz I wasn't talking about PDF's at all BTW. – AaronLS May 14 '13 at 09:54

Slightly related, but look at my question Generating a Tamper Proof Signature of some data?

The consensus seems to be that you need an external source to generate a signature/secure timestamp that you can use to sign your data, and that you are mostly out of luck if the application is completely internal as the customer can just reverse engineer any checksum mechanism.

Of course, maybe an approach where you encrypt the data before writing to disk is "good enough" - depending on your customers and the importance of getting untampered logs.

Michael Stum

You could put the file in a directory that the users do not have permission to view. Of course, if the user has full, unrestricted access, there is nothing you can do. Nothing. There will always be a way for them to get at the file (even if it's obtuse and awkward).

You could encrypt it, but they could still open it in a text editor and make a mess of it (even without knowing what the contents are, adding a single character would corrupt the file).

FrustratedWithFormsDesigner
  • And how are they going to submit the log back? ;) – Lucero Jan 27 '10 at 23:45
  • But the administrator will have access to it. This is his problem... – Earlz Jan 27 '10 at 23:45
  • @Lucero: Well, he could have some process run with elevated privileges and do the submission on the user's behalf. Or the sysadmin can do it manually. – FrustratedWithFormsDesigner Jan 27 '10 at 23:46
  • Local access = root access. As said before, if someone is determined enough he can attach the kernel debugger and go snooping everywhere in the VM of every process. Even without going that far, every administrator has the SE_DEBUG_NAME privilege, so, given enough knowledge, he can debug your application even if it's in an elevated process. Since you're running in an untrustworthy environment (attacker with root access) you *can't* build a perfect protection mechanism. As said before, just make it "secure enough" and don't waste too much time on it. – Matteo Italia Jan 28 '10 at 00:15

Use some cryptography - e.g. sign the log file.

It may be sufficient to append an MD5 hash generated from the file contents and some "secret" known only to your application (note that this could be circumvented easily by reverse-engineering the application). There are, of course, more secure approaches involving certificates; it all depends on your security requirements.
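A sketch of that trailer scheme (Python for brevity; the file layout and the secret are invented for the example). The last line carries md5(contents + secret); HMAC-MD5, shown at the end, is the sturdier keyed construction, but either way the secret ships inside the application and is recoverable.

```python
import hashlib
import hmac

SECRET = b"app-secret"  # hypothetical; lives in the application, hence recoverable

def append_digest(log_text: str) -> str:
    """Append md5(contents + secret) as the file's final line."""
    digest = hashlib.md5(log_text.encode() + SECRET).hexdigest()
    return log_text + "#md5:" + digest + "\n"

def check_digest(signed: str) -> bool:
    """Company side: split off the trailer and recompute the digest."""
    body, trailer = signed.rsplit("#md5:", 1)
    expected = hashlib.md5(body.encode() + SECRET).hexdigest()
    return hmac.compare_digest(expected, trailer.strip())

signed = append_digest("used feature X\nused feature Y\n")
assert check_digest(signed)
assert not check_digest(signed.replace("feature X", "nothing"))

# Sturdier variant: a proper keyed MAC instead of hash-of-concatenation.
mac = hmac.new(SECRET, b"used feature X\nused feature Y\n", hashlib.md5).hexdigest()
```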

Lucero
  • That can't prevent tampering if the user has full access to the computer where the log file is being created. – Daniel Pryden Jan 27 '10 at 23:47
  • @Daniel is correct. To sign the log requires a private key, which you would have to store somehow with the application, and thus the user can hack the app, gain the private key, tamper with the file, then resign the tampered file with the stolen key. – AaronLS Jan 27 '10 at 23:49
  • +1, but there's still the problem that the signing key would have to stay in the executable (or, anyhow, the executable would contain the code to obtain and use that key), so a malicious user could disassemble the exe to look for it. Obfuscating the executable would make the search much more difficult, but if your app doesn't use cryptography elsewhere a simple search for System.Security.Cryptography would lead an attacker to the signing code. Moral of the story: if someone is determined enough, he will break it; just make it "secure enough" to deter a casual attacker. EDIT Too late...:P – Matteo Italia Jan 27 '10 at 23:51
  • True, but that is a fundamental problem. While you cannot get a 100% secure solution here, one can get pretty good security, for instance by installing a certificate issued by your own CA with a non-exportable private key in the machine or user certificate store during installation, and using this to sign or encrypt the log. – Lucero Jan 27 '10 at 23:54

Any chance you can send the data online, and not store it in a file first? That would keep the data off the users' system, and require them to hack a running program if they're going to hack anything.
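A minimal sketch of that idea (Python for brevity; the endpoint URL, field names, and per-install key are all hypothetical): each usage event is serialized and tagged with an HMAC before being POSTed, so the server can at least check that the payload wasn't altered in transit, and the client never keeps a local file to edit.

```python
import hashlib
import hmac
import json
from urllib import request

API_KEY = b"per-install-key"                # hypothetical shared secret
ENDPOINT = "https://example.invalid/usage"  # hypothetical endpoint

def build_report(event: str, timestamp: int) -> bytes:
    """Serialize one usage event and tag it with an HMAC."""
    body = json.dumps({"event": event, "ts": timestamp}, sort_keys=True)
    tag = hmac.new(API_KEY, body.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"body": body, "mac": tag}).encode()

def send_report(payload: bytes) -> None:
    """POST the report; an unreachable server raises, so the app can bail out."""
    req = request.Request(ENDPOINT, data=payload,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

payload = build_report("feature X used", 1264633440)
# send_report(payload)  # not run here; the endpoint is a placeholder
```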

John Saunders
  • I like this idea. It would take a bit more infrastructure but would definitely work (very few people would be dedicated enough to hack the running code). – ChaosPandion Jan 28 '10 at 00:01

As Ken mentioned, this is the same problem that DRM systems have. They use a variety of techniques to store keys where users won't (easily) be able to find them. You'd be combining a digital signature scheme with an overly-complex scheme for storing the key. Split it into multiple pieces, scattered over the user's system - some parts in the registry, some in files on the filesystem, some encoded in the modification dates and filenames of various innocent-looking files. Also, make the DRM subsystem of your code intentionally obscure, and difficult to debug.

Another alternative would be to send signature data to some remote system periodically (if you can depend on having an internet connection).

Mark Bessey