
The Security and Design guidelines go to great lengths outlining various methods to make it more difficult for an attacker to compromise an in-app billing implementation.

Especially noted is how easy it is to reverse-engineer an .apk file, even if obfuscated via ProGuard. They even recommend modifying all sample application code, especially "known entry points and exit points".

What I find missing is any reference to the risk of wrapping certain verification methods in a single method, like a static Security.verify() that returns a boolean. It is good design practice (reduced code duplication, reusability, easier debugging, self-documentation, etc.), but all an attacker needs to do now is identify that one method and patch it to always return true... So regardless of how many times I call it, delayed or not, randomly or not, it simply doesn't matter.
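To make the single-point-of-failure concrete, here is a toy sketch (all class names, values and the "derived offset" idea are made up for illustration, not taken from any real billing code). The first method is the easy patch target; the second folds the comparison into a value the caller actually needs, so forcing a "passed" result yields garbage instead of a clean bypass:

```java
import java.util.Arrays;

public class LicenseCheck {
    // Illustrative expected token only; a real check would verify a signature.
    private static final byte[] EXPECTED = {42, 17, 93};

    // The pattern from the question: one obvious entry and exit point.
    // An attacker patches this to "return true" and every call site is defeated.
    static boolean verify(byte[] token) {
        return Arrays.equals(token, EXPECTED);
    }

    // Hedged alternative sketch: derive a value the caller needs from the
    // comparison itself. It is zero only when the token matches, so a forced
    // "success" produces a wrong offset downstream rather than a clean bypass.
    static int deriveOffset(byte[] token) {
        int acc = 0;
        for (int i = 0; i < EXPECTED.length; i++) {
            acc += (token[i] ^ EXPECTED[i]); // contributes 0 only on a match
        }
        return acc; // caller might use this as, e.g., a decryption offset
    }

    public static void main(String[] args) {
        byte[] good = {42, 17, 93};
        byte[] bad  = {1, 2, 3};
        System.out.println(verify(good));        // true
        System.out.println(deriveOffset(good));  // 0
        System.out.println(deriveOffset(bad) != 0); // true
    }
}
```

This is not tamper-proof either, of course; it only removes the single boolean that a one-instruction patch can flip.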

On the other hand, Java doesn't have macros as C/C++ do, which would let me reduce source-code duplication without funneling every check through a single exit point in a verify() function.

So my questions:

Is there an inherent tension between well-known software engineering/coding practices and design for so-called security? (in the context of Java/Android/secure transactions, at least)

What can be done to mitigate the side effects of "design for security", which seems like "shooting oneself in the foot" in terms of over-complicating software that could have been simpler, more maintainable and easier to debug?

Can you recommend good sources for further studying this subject?

Bill The Ape
  • For in-app billing, I believe the Security class has to be on your application server. – Its not blank Jan 10 '12 at 02:59
  • The value of code obfuscation is hotly debated, which is *not* what stackoverflow is for. – President James K. Polk Jan 10 '12 at 03:00
  • @coder_For_Life22 Good sense of humor. :) See http://youtu.be/TnSNCXR9fbY – Bill The Ape Jan 10 '12 at 03:01
  • @Vincent You are correct. I only brought up `Security.verify()` as an easy example. But relaying verification to a server isn't always better, as exemplified in the case described [here](http://stackoverflow.com/questions/8795517/why-is-signature-verification-on-remote-server-more-secure-than-on-device). – Bill The Ape Jan 10 '12 at 03:04
  • @GregS I didn't know that the value of code obfuscation is debated. I thought Google decided to bundle ProGuard with the ADT for a reason. Can you elaborate on that? Or provide a link? – Bill The Ape Jan 10 '12 at 03:09
  • Keeping the Security class on the server at least prevents the user from seeing your public key. And for verification you can use codes, e.g. 1 = valid, 2 = new, 3 = invalid. But yes, if someone really wants to hack your app you cannot prevent him, but you can definitely make it tough for him or her :) – Its not blank Jan 10 '12 at 03:27
  • @Vincent The last tip in that [Security and Design](http://developer.android.com/guide/market/billing/billing_best_practices.html) document refers to the public key and recommends, in case you do the verification locally, constructing it at runtime. – Bill The Ape Jan 10 '12 at 03:47
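The runtime key construction mentioned in that last comment could look something like the sketch below. Everything here is a placeholder (the class name, the pieces, the made-up string); the only point is that the full base64 key never appears as a single literal in the .dex file:

```java
// Hypothetical sketch: assemble the base64-encoded public key at runtime
// from scattered pieces, so `strings` on the .apk doesn't reveal it whole.
public class KeyParts {
    // Placeholder fragments, NOT a real key. In practice these could live
    // in different classes, resources, or be lightly transformed.
    private static final String[] PARTS = {"MIIB", "Ij", "ANBg"};

    static String publicKeyBase64() {
        StringBuilder sb = new StringBuilder();
        for (String p : PARTS) {
            sb.append(p); // reassemble in order at runtime
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(publicKeyBase64()); // MIIBIjANBg
    }
}
```

Again, this only raises the bar: anyone stepping through the code in a debugger still sees the assembled key.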

3 Answers


As usual, it's a tradeoff. Making your code harder to reverse-engineer/crack involves making it less readable and harder to maintain. You decide how far to go, based on your intended user base, your own skills in the area, time/cost, etc. This is not specific to Android. Watch this Google I/O presentation for various stages of obfuscating and making your code tamper resistant. Then decide how far you are willing to go for your own apps.

On the other hand, you don't have to obfuscate/harden all of your code, just the part that deals with licensing, etc. That is usually a very small part of the whole codebase and doesn't really need to change often, so you can probably live with it being hard to follow/maintain. Just keep some notes on how it works, so you can remind yourself 2 years later :).
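One way to sketch that "harden only the sensitive part" idea with ProGuard (the package name `com.example.billing` is a placeholder, and the exact flag combination would need tuning for a real app):

```
# Sketch only: aggressive settings applied globally, with readable
# names preserved everywhere EXCEPT the billing code, so field
# debugging and stack traces stay usable outside the sensitive part.
-optimizationpasses 5
-overloadaggressively
-repackageclasses ''

# Keep Android entry points so the app still starts after obfuscation
-keep public class * extends android.app.Activity
-keep public class * extends android.app.Service

# Keep original names for everything outside the billing package;
# com.example.billing.** gets fully obfuscated names.
-keepnames class !com.example.billing.**, ** { *; }
```

Combined with a mapping file from `-printmapping`, this keeps most crash reports readable while leaving the licensing classes as the hardest part to follow.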

Nikolay Elenkov

The counter-productivity you are describing is the tip of the iceberg... No software is 100% bug-free on release, so what do you do when users start reporting problems?

How do you troubleshoot or debug field problems after you disabled logging, stack tracing and all kinds of other information that help reverse-engineers but also help the legitimate development team?

Regex Rookie

However tough the obfuscation methods are, there is always a way to reverse-engineer them. I mean, if your software gets popular in the hacker community, eventually someone will try to reverse-engineer it.

Obfuscation is just a method to make the process of reverse engineering tougher.

So is packing. Many packing methods are available, but so are the processes to reverse-engineer them.

You can check www.tuts4you.com to see how many guides are available.

I am not an expert like many others here, but this is my experience from learning reverse engineering. Recently I have also seen a lot of guides on reverse-engineering Android applications. I even saw an Android reversing challenge in a nullc0n (not sure) CTF. If you want, I can look up the site and mention it.

Adinia
kidd0