Let me add to the existing answers to give you a somewhat broader view of software protection.
Hackers won't just use `strace`; they'll use whatever tools they have in their tool chest, but in order of increasing complexity, perhaps starting with something as simple as `strings` in most cases. Most hackers I know are inherently lazy and will therefore take the path of least resistance. (NB: by hacker I mean a technically very skilled person, not a cracker; the latter often has the same skill set, but a different set of ethics.)
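To give an idea of how low that first rung of the ladder is: `strings` does little more than scan a file for runs of printable characters. A rough sketch of the same idea in Python (the minimum length of 4 matches the utility's default):

```python
import re
import sys

def extract_strings(path, min_len=4):
    """Scan a file for runs of printable ASCII, essentially what strings(1) does."""
    with open(path, "rb") as f:
        data = f.read()
    # Printable ASCII range 0x20-0x7e, repeated min_len times or more
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

if __name__ == "__main__":
    # Any hard-coded license key, URL, or "hidden" message in the binary
    # shows up right here; no disassembler required.
    for s in extract_strings(sys.argv[1]):
        print(s)
```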
Generally speaking, from the perspective of the reverse engineer, just about anything can be "cracked" or worked around. The question is how much time and/or determination the attacker has. Consider that some student may do this just for giggles, while some "release groups" do it for fame within their "scene".
Let's consider hardware dongles, for example. Most software authors/companies think that they somehow magically "buy" security when licensing some dongle. However, if they aren't careful with the implementation of the system, it is as simple to work around as your attempt. Even when they are careful enough, it is often still possible to emulate a dongle, although it will require some skill to extract the information on the dongle. Some dongles (I will not conceal that fact from you) are therefore "smart", meaning they contain a CPU or even a full-fledged embedded system. If vital parts of a software product are executed on the dongle, and all that enters the dongle is the input and all that leaves it is the output, that makes for a pretty good protection. However, for the most part it will annoy honest customers and attackers equally.
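To make the "aren't careful with the implementation" failure mode concrete, here is a deliberately naive sketch. `dongle_present` is a hypothetical stand-in for whatever a vendor SDK provides, not any real API:

```python
def dongle_present():
    # Hypothetical stand-in for a vendor SDK call that queries the
    # hardware; a real implementation would talk to a USB device.
    return False

def do_the_actual_work():
    print("the valuable code runs entirely on the host CPU")

def run_protected_feature():
    # The whole "protection" hinges on one conditional. An attacker
    # patches this branch (often a single byte in the binary) or stubs
    # out dongle_present to return True, and the dongle is out of the
    # loop. Contrast that with executing vital code *on* the dongle,
    # where there is no branch to patch.
    if not dongle_present():
        raise SystemExit("dongle missing")
    do_the_actual_work()

if __name__ == "__main__":
    run_protected_feature()
```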
Or let's consider encryption as another example. Many developers don't seem to grasp the concept of a public and a secret key and think that "hiding" the secret key inside the code makes it somehow safer. Well, it doesn't. The code now contains both the algorithm and the secret key; how convenient is that for the attacker?
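As a caricature of that mistake, here is a hypothetical symmetric license check; the key, the scheme, and all names are made up for illustration:

```python
import hashlib

# The "hidden" secret key, shipped inside the program. strings(1) or a
# quick disassembly hands it straight to the attacker.
SECRET_KEY = b"definitely-well-hidden"

def check_license(user, license_code):
    # Symmetric scheme: the same key that issues license codes also
    # checks them. Whoever has the program therefore has everything
    # needed to write a keygen for it.
    expected = hashlib.sha256(SECRET_KEY + user.encode()).hexdigest()[:16]
    return license_code == expected

if __name__ == "__main__":
    # An attacker who extracted SECRET_KEY mints a valid code for any name:
    code = hashlib.sha256(SECRET_KEY + b"anyone").hexdigest()[:16]
    print(check_license("anyone", code))  # True
```

The asymmetric fix is to sign licenses with a private key that never ships and to embed only the public verification key, so the program can check a license but never mint one.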
The general problem in most of these cases is that on one hand you trust the users (because you sell to them), but on the other hand you don't trust them (because you try to protect your software from them somehow). When you look at it this way, you can see how futile it actually is. Most of the time you will disgruntle the honest customers while only delaying the attacker a little (software protection is binary: either it still protects, or it has already been cracked).
Consider instead the path the makers of IDA Pro took. They watermark all their binaries before the user gets them. Then, in case those binaries get leaked, legal measures can be taken. And even without taking legal measures into account, they can shame (and have shamed) those who leaked their product publicly. Also, if you are responsible for a leak, you won't be sold any upgrades to the software and the makers of IDA will not do business with your employer. That's quite an incentive to keep your copy of IDA safe. Now, I get it, IDA is somewhat of a niche product, but the approach is still fundamentally different and doesn't share the issues of the conventional attempts at protecting software.
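How the makers of IDA actually apply their watermark is not public, so purely as an illustration of the general idea, here is a sketch that stamps a per-customer mark over a placeholder in the master binary before shipping. The placeholder, the secret, and the HMAC construction are all my assumptions:

```python
import hashlib
import hmac

# Placeholder baked into the master binary at build time; each shipped
# copy gets it overwritten with a customer-specific mark.
PLACEHOLDER = b"\x00WATERMARK-PLACEHOLDER-32-BYTES\x00"  # 32 bytes
VENDOR_SECRET = b"kept-on-the-build-server-only"

def watermark_copy(master, customer_id):
    # Deterministic per-customer mark: the vendor can recompute it to
    # trace a leaked copy, but it is opaque to the customer. Same
    # length as the placeholder, so all file offsets stay intact.
    mark = hmac.new(VENDOR_SECRET, customer_id.encode(),
                    hashlib.sha256).digest()[:len(PLACEHOLDER)]
    assert PLACEHOLDER in master, "master binary missing placeholder"
    return master.replace(PLACEHOLDER, mark, 1)
```

A production scheme would spread the mark redundantly through the file so that locating and stripping one copy of it doesn't defeat the tracing.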
Another alternative is of course to offer a service rather than software. You could give the user a token that the software sends to your server. The server then offers an update (or whatever the service is) after decoding the token (which we assume to be an encrypted message) and checking its validity. In this case the user only ever receives the token, never the secret key to decode it; your server, on the other hand, holds that key and uses it to validate the token. Call it a product key or whatever; there are dozens of ways one can imagine. The point is that you don't end up in that contradiction of trusting and mistrusting the user at the same time. You only mistrust the user and can, for example, blacklist her token if it has been abused.
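A minimal sketch of what the server side of such a scheme could look like. I'm using an HMAC here rather than the encrypted message assumed above, but the trust structure is the same: the validation key lives only on the server, and the names and token layout are mine:

```python
import base64
import hashlib
import hmac

# Lives only on the server; the shipped software contains no key at all.
SERVER_SECRET = b"server-side-only-secret"
MAC_LEN = hashlib.sha256().digest_size  # 32 bytes

def issue_token(customer_id):
    # Token = payload followed by a MAC over it. Only the holder of
    # SERVER_SECRET can produce a MAC that verifies, so no amount of
    # reversing the client lets an attacker forge tokens.
    payload = customer_id.encode()
    tag = hmac.new(SERVER_SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + tag).decode()

def validate_token(token, blacklist):
    raw = base64.urlsafe_b64decode(token.encode())
    payload, tag = raw[:-MAC_LEN], raw[-MAC_LEN:]
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).digest()
    # compare_digest avoids leaking the MAC via timing differences.
    if not hmac.compare_digest(tag, expected):
        return False
    # The mistrust is one-sided: an abused token is simply blacklisted.
    return payload.decode() not in blacklist

if __name__ == "__main__":
    token = issue_token("customer-42")
    print(validate_token(token, blacklist=set()))             # True
    print(validate_token(token, blacklist={"customer-42"}))   # False
```

The client only ever stores and transmits the opaque token; validation and revocation both happen on the server, which is exactly where the mistrust belongs.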