848

I am developing a payment processing app for Android, and I want to prevent a hacker from accessing any resources, assets or source code from the APK file.

If someone changes the .apk extension to .zip then they can unzip it and easily access all the app's resources and assets, and using dex2jar and a Java decompiler, they can also access the source code. It's very easy to reverse engineer an Android APK file - for more details see Stack Overflow question Reverse engineering from an APK file to a project.

I have used the ProGuard tool provided with the Android SDK. When I reverse engineer an APK file generated using a signed keystore and ProGuard, I get obfuscated code.

However, the names of Android components remain unchanged, and some code, like the key-values used in the app, remains unchanged. As per the ProGuard documentation, the tool can't obfuscate components mentioned in the manifest file.

Now my questions are:

  1. How can I completely prevent reverse engineering of an Android APK? Is this possible?
  2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
  3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?
Peter Mortensen
sachin003
  • It sounds like you may be using "security by obscurity" if your payment processing scheme relies on the operation of the client remaining secret. – PeterJ Dec 13 '12 at 06:45
  • @PeterJ thanks for your reply, I am using Proguard tool to provide security to my app. http://developer.android.com/tools/help/proguard.html – sachin003 Dec 13 '12 at 07:02
  • Have you considered writing the important parts of the code in C/C++ and add them as a compiled library? They can be disassembled into assembly code, but reverse-engineering a large library from assembly is extremely time-consuming. – Leo Dec 13 '12 at 08:40
  • @Leo I will try your suggestion to write important code in C/C++. – sachin003 Dec 13 '12 at 09:19
  • [Yes, nobody can decompile C code](http://boomerang.sourceforge.net) ... – dualed Dec 13 '12 at 10:56
  • Welcome to the fundamental issue of creating any digital asset. Hackers can get down to the machine instruction level, so if a computer can read the file then it can be hacked open/copied, and no amount of obfuscation or DRM can ever completely stop a determined hacker. If you need security then make sure that the private keys are never in source, and know at the design stage that only isolation (remote and/or dedicated hardware) can ever protect them. – Keith Dec 13 '12 at 11:08
  • I don't know why you want to obfuscate your code but I hope it's not necessary for the core security (for the user) of your payment processing app, because you should never design or even implement any cryptography parts yourself and instead use openly available standards. Hopefully this wasn't relevant. – Nathan Cooper Dec 13 '12 at 11:14
  • @NathanCooper By reading all these reply I decided to move core coding in java to Native code - C/C++ & for rest java code proguard + DexGuard is okay. Other better solutions are always welcome. – sachin003 Dec 13 '12 at 11:19
  • @dualed: Have you seen the output that Boomerang gives? It's so terrible it's completely useless for anything but the most trivial programs. – BlueRaja - Danny Pflughoeft Dec 13 '12 at 12:15
  • @BlueRaja-DannyPflughoeft The "fanciness" of the generated code does not really matter. Of course it can not infer the original function and variable names because that information is simply not in the file. But any string in the executable is in there as plain text and any nifty algorithm you may want to protect will work just as well as generated code or assembly no matter how bad it looks. On top of that you lose compatibility, as native code for ARM will certainly not run on Atom, etc. – dualed Dec 13 '12 at 13:03
  • Note that depending on what your payment processing app does, there may be regulatory and legal policy that affects your app and could potentially expose you to severe penalties: see PCI compliance, starting with http://www.pcicomplianceguide.org/pcifaqs.php#11. – Brenton Fletcher Dec 13 '12 at 13:20
  • @dualed: Sure it can be decompiled. Anything can be reverse-engineered, the question is at what cost (in time or money). I haven't looked at the output from Boomerang or Dedexer, but I would assume that the output from Boomerang would be significantly harder to interpret which is what the OP asked for. If you want to take it one level further, write directly in assembly. As for the algorithm working just as well in assembly or generated code, well that is sort of the point. If you just want to use the algorithm directly, why reverse-engineer it? – Leo Dec 13 '12 at 14:13
  • Oh, and Boomerang does not support ARM apparently. – Leo Dec 13 '12 at 14:33
  • @Leo That is the point. The cost is the same for the attacker. They just use a different tool. On the other hand the OP may now have the false security that their native code is not (easily) readable. And since DexGuard can obfuscate string constants, it is probably more effective. – dualed Dec 13 '12 at 14:42
  • @dualed sorry, but decompiled Java or C# is far more readable than decompiled/disassembled native code, so the cost for the attacker is not the same, not even close. Even on logarithmic scale )) – Zar Shardan Dec 13 '12 at 14:53
  • @ZarShardan Oh yes, and you make it so much harder for an attacker to find sensitive sections of code by moving them to a separate file ... – dualed Dec 13 '12 at 15:04
  • Google itself tried to tackle on piracy by saving encrypted apks in `/mnt/asec`, starting in JB. I think the whole thing was disabled due to bugs. I don't know what is the current state of this mechanism, but even if it were in place it would only make things harder for JB+ non-rooted devices. – Mister Smith Dec 13 '12 at 17:09
  • @dualed: `The "fanciness" of the generated code does not really matter.` Er, yes it does, especially when the output-C is essentially the same as the input-assembly. Not only are the function names not there, but I've found that most of the time, it can't even *recognize* a function call, or basic combinations of arithmetic, or anything else that would make the C easier to read than the assembly. The output is basically useless. And that's when Boomerang *can* actually decompile a program without crashing or erroring out, which is also pretty rare. – BlueRaja - Danny Pflughoeft Dec 13 '12 at 18:05
  • Even assuming an equivalently high level of competence in interpreting both native assembly and, say, Java bytecode, it's clearly going to be quicker to make something out of the Java because of the extra information in bytecode and slightly higher level instructions. – Elliott Dec 13 '12 at 22:44
  • Are you sure you should be coding security-sensitive applications, if you think that programs can be absolutely protected against reverse engineering and tampering, without tamper-resistant hardware? (Or even with?) – Kaz Dec 14 '12 at 00:15
  • I don't believe in security by obscurity - Everybody knows how common encryption algorithms work, but it doesn't make them less secure. As long as the keys are well protected. I don't see how this payment processing application is any different. Knowing its internals won't help the attacker to break into the payment system if the design is solid and keys/password remain secret and different for every client. – Zar Shardan Dec 14 '12 at 01:58
  • Possible duplicate of [this question](http://stackoverflow.com/questions/3593420/android-getting-source-code-from-an-apk-file) – Hardik Thaker Dec 15 '12 at 19:50
  • What is the name of this payment app so I know not to use it? The OP seems to miss the point that security through obscurity is not security at all... – Eloff Dec 16 '12 at 15:32
  • @HardikThaker This is not duplicate question like you mentioned in that link, read my questions again. – sachin003 Dec 17 '12 at 06:41
  • See also [Android Game Keeps Getting Hacked](http://stackoverflow.com/questions/5600143/android-game-keeps-getting-hacked) – rds Dec 21 '12 at 17:17
  • you can use [dexGuard](http://www.saikoa.com/dexguard), it's not 100% secure, but it's the most secure, and it's not free! – M D P Jul 07 '14 at 22:34
  • Check this https://1belong2jesus.wordpress.com/2014/07/29/configure-proguard/ – Deniz Jul 29 '14 at 09:09
  • what is wrong in generating a signed apk? doesnt that solve this problem? – roronoa_zoro Oct 30 '18 at 04:24
  • Use proguard it's bundled with android studio and free of cost. – Shivam Yadav Mar 15 '19 at 10:26

32 Answers

404

 1. How can I completely avoid reverse engineering of an Android APK? Is this possible?

AFAIK, there is no trick that completely prevents reverse engineering.

As @inazaruk also put it very well: whatever you do to your code, a potential attacker is able to change it in any way she or he finds feasible. You basically can't protect your application from being modified, and any protection you put in there can be disabled/removed.

 2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?

You can do different tricks to make hacking harder though. For example, use obfuscation (if it's Java code). This usually slows down reverse engineering significantly.

 3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?

As everyone says, and as you probably know, there's no 100% security. But the place to start for Android, which Google has built in, is ProGuard. If you have the option of including shared libraries, you can put the critical code in C++ to verify file sizes, integrity, etc. If you need to add an external native library to your APK's library folder on every build, you can do so as follows.

Put the library in the native library path which defaults to "libs" in your project folder. If you built the native code for the 'armeabi' target then put it under libs/armeabi. If it was built with armeabi-v7a then put it under libs/armeabi-v7a.

<project>/libs/armeabi/libstuff.so
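
A minimal sketch of consuming such a library from Java; "stuff" is the hypothetical library name from the path above, and the commented-out native method merely illustrates what the C++ side might expose:

```java
// Loads the native library placed at libs/<abi>/libstuff.so.
// System.loadLibrary adds the "lib" prefix and ".so" suffix itself.
public class NativeLoader {
    private static boolean nativeAvailable;

    static {
        try {
            System.loadLibrary("stuff"); // hypothetical library name
            nativeAvailable = true;
        } catch (UnsatisfiedLinkError e) {
            // The .so is missing for this ABI; fall back or fail closed.
            nativeAvailable = false;
        }
    }

    // Implemented in C/C++ inside libstuff.so, e.g. an integrity check:
    // public static native boolean verifyIntegrity();

    static boolean nativeAvailable() {
        return nativeAvailable;
    }

    public static void main(String[] args) {
        System.out.println(nativeAvailable() ? "native code loaded" : "libstuff.so not found");
    }
}
```
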
Peter Mortensen
Bhavesh Patadiya
  • For Payment transaction I've used ISO 8585 standard, right now schema for this standard is in key-value pair using HashMap collection of Java & when I do reverse engineering on apk I'll get all schema. Is it possible to avoid schema get exposed via reverse engineering? Can your last suggestion of Share libraries useful in this case? Do you have any links so that I can get exposure to the share libraries in android. – sachin003 Dec 13 '12 at 07:29
  • how about encrypting your strings in the code and decrypting them at runtime? If you do the decryption on a remote server, like other people suggested, you don't get the problem that the decryption key is in the sources. – kutschkem Dec 13 '12 at 12:33
  • yes, encryption is way, but it is not sure to be notHacked. If i am encrypting String in order to decrypt them, i need one unique id in the code. and if anyone able to decompile it then it will be very easy to get the Unique id. – Bhavesh Patadiya Dec 13 '12 at 12:47
  • why you added *Edited* stuff? its all regular. – Mohammed Azharuddin Shaikh Dec 14 '12 at 10:25
  • @hotveryspicy: yes i have now removed the "edited" mark from answer.i have edited my answer beacause he wanted to know more about how Share libraries useful in this case. – Bhavesh Patadiya Dec 14 '12 at 10:51
  • @Bhavesh yes your correct, I was looking for shared libaries, I'm still working on it, anyways thanks for reply. – sachin003 Dec 17 '12 at 06:34
  • how to hide string inside " " i use proguard but it's not hide or convert value inside " " is there anyway to hide that? – Bhavin Patel Mar 25 '17 at 07:05
  • this is old. but for someone looking for extra information. you don't have to use one unique key. see this: http://www.androidauthority.com/how-to-hide-your-api-key-in-android-600583/ – Dreaded semicolon Mar 29 '17 at 10:16
  • If you you want to check the apk code is visible or not please try this online apk decompiler tool https://www.unboxapk.com – Abdul Mar 13 '22 at 12:42
138

AFAIK, you cannot protect the files in the /res directory any more than they are protected right now.

However, there are steps you can take to protect your source code, or at least what it does if not everything.

  1. Use tools like ProGuard. These will obfuscate your code and make it harder (if not impossible) to read when decompiled.
  2. Move the most critical parts of the service out of the app and into a webservice, hidden behind a server-side language like PHP. For example, if you have an algorithm that's taken you a million dollars to write, you obviously don't want people stealing it out of your app. Move the algorithm and have it process the data on a remote server, and use the app to simply provide it with the data. Or use the NDK to write it natively into .so files, which are much less likely to be decompiled than APKs. I don't think a decompiler for .so files even exists as of now (and even if it did, it wouldn't be as good as the Java decompilers). Additionally, as @nikolay mentioned in the comments, you should use SSL when interacting between the server and device.
  3. When storing values on the device, don't store them in a raw format. For example, if you have a game and you're storing the amount of in-game currency the user has in SharedPreferences, let's assume it's 10000 coins. Instead of saving 10000 directly, save it using an algorithm like ((currency*2)+1)/13. So instead of 10000, you save 1538.53846154 into the SharedPreferences. However, the above example isn't perfect, and you'll have to work to come up with an equation that won't lose currency to rounding errors, etc.
  4. You can do a similar thing for server side tasks. Now for an example, let's actually take your payment processing app. Let's say the user has to make a payment of $200. Instead of sending a raw $200 value to the server, send a series of smaller, predefined, values that add up to $200. For example, have a file or table on your server that equates words with values. So let's say that Charlie corresponds to $47, and John to $3. So instead of sending $200, you can send Charlie four times and John four times. On the server, interpret what they mean and add it up. This prevents a hacker from sending arbitrary values to your server, as they do not know what word corresponds to what value. As an added measure of security, you could have an equation similar to point 3 for this as well, and change the keywords every n number of days.
  5. Finally, you can insert random useless source code into your app, so that the hacker is looking for a needle in a haystack. Insert random classes containing snippets from the internet, or just functions for calculating random things like the Fibonacci sequence. Make sure these classes compile, but aren't used by the actual functionality of the app. Add enough of these false classes, and the hacker would have a tough time finding your real code.
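
Point 3 can be sketched with an exactly reversible integer transform, which avoids the rounding problem the division-based example runs into. The constants here are arbitrary and offer no real security on their own (an attacker who decompiles the app can read them); they only raise the cost of casual tampering:

```java
// Sketch of point 3: store an encoded value instead of the raw one.
// The transform is exactly reversible, so no currency is lost to rounding.
public class CoinCodec {
    private static final long MULT = 7;    // arbitrary multiplier
    private static final long OFFSET = 13; // arbitrary offset

    static long encode(long coins) {
        return coins * MULT + OFFSET;
    }

    static long decode(long stored) {
        return (stored - OFFSET) / MULT;
    }

    public static void main(String[] args) {
        long stored = encode(10000);
        System.out.println(stored);         // 70013, not the raw 10000
        System.out.println(decode(stored)); // 10000
    }
}
```
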

All in all, there's no way to protect your app 100%. You can make it harder, but not impossible. Your web server could be compromised, the hacker could figure out your keywords by monitoring multiple transaction amounts and the keywords you send for it, the hacker could painstakingly go through the source and figure out which code is a dummy.

You can only fight back, but never win.

Raghav Sood
  • Instead of doing tricks with values you send to your server, use SSL and properly validate the server certificate. Security by obscurity is generally a bad idea. – Nikolay Elenkov Dec 13 '12 at 07:19
  • @NikolayElenkov That would only protect the values during the transfer from device to server. By obscuring it, you can protect it while it is in the RAM, and in device storage. – Raghav Sood Dec 13 '12 at 07:20
  • Well, true :P But there's no point just sitting back and letting the hacker have a free run. – Raghav Sood Dec 13 '12 at 07:27
  • BTW, the same goes for storing things in files: use proper obfuscation or encryption, not easy tricks. – Nikolay Elenkov Dec 13 '12 at 07:28
  • **you can insert random useless source code into your app**. This can't really help either. This will only bloat your application up, while making it harder to maintain as well. – Anirudh Ramanathan Dec 13 '12 at 08:17
  • @Cthulhu these aren't meant to help with the server side or the mid transaction. These are meant to help with the case where someone decompiles your app and goes through the code to look at your implementation etc. with the intention to steal it. If you have random code, it's harder for them to find what they want. – Raghav Sood Dec 13 '12 at 08:21
  • *harder?* Yes. But they don't give you anything but a false sense of security. It isn't that hard to weed out the code which is never executed, so why bother doing that. – Anirudh Ramanathan Dec 13 '12 at 08:22
  • I get that, and I fully agree. While it may not be amazingly secure or a major roadblock to the hacker, it does provide some protection, even if it is minor. – Raghav Sood Dec 13 '12 at 08:22
  • You could also fake use the useless code and send the data to the server which will discard it. False security maybe, but a pain in the ass for the potential hacker, right? – Benoit Duffez Dec 13 '12 at 09:10
  • If your algorithm is worth a million dollars, then just because there's no decompiler for `.so` files doesn't mean I can't read assembly :) Most of these fall into the same trap, just obfuscating instead of properly protecting yourself. Obfuscation only works if it's not worth an attacker's time to follow through, so if you build something on these techniques you'd better hope they don't get popular, otherwise you're screwed because all of a sudden your codebase is unmaintainable and it needs huge changes. – Phoshi Dec 13 '12 at 13:07
  • If you try things like replacing numbers with predefined words or saving values after passing them through functions, then you are on the wrong path already. These are the most basic cryptographic techniques that have been used and broken long time ago through the simplest cryptanalytic attacks and they won't last long in your system. – Bogdan Alexandru Jun 14 '13 at 09:26
  • I don't get why this answer has such a high score. 3. and 4. for one are just plain silly and will amount to no security at all. – Matti Virkkunen Nov 08 '13 at 22:54
  • @RaghavSood :Need your help .Take a look please http://stackoverflow.com/questions/22602559/how-do-implement-this-in-android –  Mar 25 '14 at 05:26
  • number 3, 4 and 5 just seem like such a pain. what if I work in a team? I'd be purposely making the code unreadable and unmaintainable for my present (and future) colleagues. what if I leave the team at any point in time after adding a bunch of cryptic stuff to the code? – user3453281 Jun 16 '15 at 16:07
  • `So let's say that Charlie corresponds to $47, and John to $3.` It's a joke, right? – Bora M. Alper Jul 22 '16 at 10:06
  • FTR, a decompiler for `.so` files does exist, it's HexRays of IdaPro. It's far from perfect obviously, but still makes stuff easier to reverse-engineer, because 3-4 lines of assembly may equal to one of C. – Hi-Angel Oct 23 '17 at 17:48
  • This is a terrible answer and I have no idea why it is so upvoted. Please do not follow this advice, except for number 1, and the web service part of number 2 (but not the `.so` part - they can be decompiled too!) – Hannes Hertach Aug 31 '20 at 08:30
  • Most of given items are silly and shows while author knows programming from a developper point of view, he hasn't got the reverse engineering point of view. For 3 I'd just say most people edit value at run time anyway. Nothing you can do about it. If you hide real value behind display value they'll still get it through debugging. For coins and such datas your one and only solution is to store it server side. Period. – user Oct 14 '20 at 11:21
134

At no point in the history of computing has it ever been possible to prevent reverse engineering of software when you give a working copy of it to your attacker. And in all likelihood, it never will be possible.

With that understood, there is an obvious solution: don't give your secrets to your attacker. While you can't protect the contents of your APK, what you can protect is anything you don't distribute. Typically this is server-side software used for things like activation, payments, rule-enforcement, and other juicy bits of code. You can protect valuable assets by not distributing them in your APK. Instead, set up a server that responds to requests from your app, "uses" the assets (whatever that might mean) and then sends the result back to the app. If this model doesn't work for the assets you have in mind, then you may want to re-think your strategy.

Also, if your primary goal is to prevent app piracy: don't even bother. You've already burned more time and money on this problem than any anti-piracy measure could possibly ever hope to save you. The return on investment for solving this problem is so low that it doesn't make sense to even think about it.

tylerl
  • The first paragraph is the best answer. If your attacker controls the hardware, they will always be able to defeat your software somehow. Anything that truly needs to be protected must stay on hardware you control, it's as simple as that. And the final paragraph, about ROI, is spot on as well. – Daniel Pryden Dec 13 '12 at 17:18
102

First rule of app security: Any machine to which an attacker gains unrestricted physical or electronic access now belongs to your attacker, regardless of where it actually is or what you paid for it.

Second rule of app security: Any software that leaves the physical boundaries inside which an attacker cannot penetrate now belongs to your attacker, regardless of how much time you spent coding it.

Third rule: Any information that leaves those same physical boundaries that an attacker cannot penetrate now belongs to your attacker, no matter how valuable it is to you.

The foundations of information technology security are based on these three fundamental principles; the only truly secure computer is the one locked in a safe, inside a Faraday cage, inside a steel cage. There are computers that spend most of their service lives in just this state; once a year (or less), they generate the private keys for trusted root certification authorities (in front of a host of witnesses with cameras recording every inch of the room in which they are located).

Now, most computers are not used under these types of environments; they're physically out in the open, connected to the Internet over a wireless radio channel. In short, they're vulnerable, as is their software. They are therefore not to be trusted. There are certain things that computers and their software must know or do in order to be useful, but care must be taken to ensure that they can never know or do enough to cause damage (at least not permanent damage outside the bounds of that single machine).

You already knew all this; that's why you're trying to protect the code of your application. But, therein lies the first problem; obfuscation tools can make the code a mess for a human to try to dig through, but the program still has to run; that means the actual logic flow of the app and the data it uses are unaffected by obfuscation. Given a little tenacity, an attacker can simply un-obfuscate the code, and that's not even necessary in certain cases where what he's looking at can't be anything else but what he's looking for.

Instead, you should be trying to ensure that an attacker cannot do anything with your code, no matter how easy it is for him to obtain a clear copy of it. That means, no hard-coded secrets, because those secrets aren't secret as soon as the code leaves the building in which you developed it.

These key-values you have hard-coded should be removed from the application's source code entirely. Instead, they should be in one of three places; volatile memory on the device, which is harder (but still not impossible) for an attacker to obtain an offline copy of; permanently on the server cluster, to which you control access with an iron fist; or in a second data store unrelated to your device or servers, such as a physical card or in your user's memories (meaning it will eventually be in volatile memory, but it doesn't have to be for long).

Consider the following scheme. The user enters their credentials for the app from memory into the device. You must, unfortunately, trust that the user's device is not already compromised by a keylogger or Trojan; the best you can do in this regard is to implement multi-factor security, by remembering hard-to-fake identifying information about the devices the user has used (MAC/IP, IMEI, etc), and providing at least one additional channel by which a login attempt on an unfamiliar device can be verified.

The credentials, once entered, are obfuscated by the client software (using a secure hash), and the plain-text credentials discarded; they have served their purpose. The obfuscated credentials are sent over a secure channel to the certificate-authenticated server, which hashes them again to produce the data used to verify the validity of the login. This way, the client never knows what is actually compared to the database value, the app server never knows the plaintext credentials behind what it receives for validation, the data server never knows how the data it stores for validation is produced, and a man in the middle sees only gibberish even if the secure channel were compromised.
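
A rough sketch of that double-hashing flow follows. The method names are made up for illustration, and a real deployment would use a slow password KDF (bcrypt, scrypt, Argon2) with a per-user random salt rather than plain SHA-256; this only shows the shape of the scheme:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Client sends H(credentials); server stores/compares H(salt || H(credentials)).
// Neither side ever holds both the plaintext and the stored verifier.
public class CredentialHashing {
    static byte[] sha256(byte[] input) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("SHA-256").digest(input);
    }

    // Client side: obfuscate before the plaintext leaves the device.
    static byte[] clientObfuscate(String password) throws NoSuchAlgorithmException {
        return sha256(password.getBytes(StandardCharsets.UTF_8));
    }

    // Server side: hash again with a per-user salt before comparing
    // against the value stored in the database.
    static byte[] serverVerifier(byte[] clientHash, byte[] salt) throws NoSuchAlgorithmException {
        byte[] combined = new byte[salt.length + clientHash.length];
        System.arraycopy(salt, 0, combined, 0, salt.length);
        System.arraycopy(clientHash, 0, combined, salt.length, clientHash.length);
        return sha256(combined);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] clientHash = clientObfuscate("correct horse battery staple");
        byte[] verifier = serverVerifier(clientHash,
                "per-user-salt".getBytes(StandardCharsets.UTF_8));
        System.out.println(verifier.length); // SHA-256 output is 32 bytes
    }
}
```
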

Once verified, the server transmits back a token over the channel. The token is only useful within the secure session, is composed of either random noise or an encrypted (and thus verifiable) copy of the session identifiers, and the client application must send this token on the same channel to the server as part of any request to do something. The client application will do this many times, because it can't do anything involving money, sensitive data, or anything else that could be damaging by itself; it must instead ask the server to do this task. The client application will never write any sensitive information to persistent memory on the device itself, at least not in plain text; the client can ask the server over the secure channel for a symmetric key to encrypt any local data, which the server will remember; in a later session the client can ask the server for the same key to decrypt the data for use in volatile memory. That data won't be the only copy, either; anything the client stores should also be transmitted in some form to the server.
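
The local-storage part of this scheme might look roughly like the following. Note the key is generated locally here only to keep the sketch self-contained; in the scheme described, the client would fetch it from the server over the secure channel each session:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Encrypt sensitive local data with a symmetric key the device never
// persists; decrypted data lives only in volatile memory.
public class LocalDataCrypto {
    static byte[] encrypt(SecretKey key, byte[] iv, byte[] plaintext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(plaintext);
    }

    static byte[] decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        // In the described scheme this key comes from the server per session.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[12];            // fresh IV per stored record
        new SecureRandom().nextBytes(iv);

        byte[] stored = encrypt(key, iv, "account=1234".getBytes(StandardCharsets.UTF_8));
        byte[] recovered = decrypt(key, iv, stored);
        System.out.println(new String(recovered, StandardCharsets.UTF_8));
    }
}
```
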

Obviously, this makes your application heavily dependent on Internet access; the client device cannot perform any of its basic functions without proper connection to and authentication by the server. No different than Facebook, really.

Now, the computer that the attacker wants is your server, because it and not the client app/device is the thing that can make him money or cause other people pain for his enjoyment. That's OK; you get much more bang for your buck spending money and effort to secure the server than in trying to secure all the clients. The server can be behind all kinds of firewalls and other electronic security, and additionally can be physically secured behind steel, concrete, keycard/pin access, and 24-hour video surveillance. Your attacker would need to be very sophisticated indeed to gain any kind of access to the server directly, and you would (should) know about it immediately.

The best an attacker can do is steal a user's phone and credentials and log in to the server with the limited rights of the client. Should this happen, just like losing a credit card, the legitimate user should be instructed to call an 800 number (preferably easy to remember, and not on the back of a card they'd carry in their purse, wallet or briefcase which could be stolen alongside the mobile device) from any phone they can access that connects them directly to your customer service. They state their phone was stolen, provide some basic unique identifier, and the account is locked, any transactions the attacker may have been able to process are rolled back, and the attacker is back to square one.

KeithS
  • perfect answer !! i just loved your way to get data from server with some encrypted token , i think this is next to impossible to decode after that . – dharmendra Jul 15 '13 at 13:03
  • I know this is a bit late but what about accessing the server part. Services like Microsoft azure provides you something like this to access their server: MobileServiceClient mClient = new MobileServiceClient( "MobileServiceUrl", // Replace with the above Site URL "AppKey", // replace with the Application Key this) and pretty much anyone who has access to that can access their server end edit it – edwinj Aug 06 '15 at 13:13
  • @edwinj - No problem in computer science that cannot be solved with another layer of indirection. Your snippet gives the basic idea for accessing an Azure mobile client service; it provides a basic level of security against "drive-bys" of Microsoft's front door. You can in turn add additional layers, such as requiring a session key (basically what the encrypted token is) on any service call, and to get that key, they must first authenticate with a combination of knowledge of the credentials and the encryption scheme. – KeithS Jan 25 '18 at 14:26
  • One of the best answers. – debo.stackoverflow May 22 '19 at 11:21
68

 1. How can I completely avoid reverse engineering of an Android APK? Is this possible?

This isn't possible

 2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?

When somebody changes the .apk extension to .zip and unzips it, they can easily get all the resources (except the binary manifest), and with APKtool one can get the real content of the manifest file too. So again, no.

 3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?

Again, no, but you can raise the bar to some degree, that is:

  • Download a resource from the Web and do some encryption process
  • Use a pre-compiled native library (C, C++, JNI, NDK)
  • Always perform some hashing (MD5/SHA keys or any other logic)
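
The hashing bullet could look something like this: compute a digest of a critical asset at runtime and compare it against an expected value, ideally fetched from your server rather than hard-coded. The expected digest below is the well-known SHA-256 of empty input, used purely to make the sketch self-checking:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Runtime integrity check: hash an asset and compare against an
// expected digest. A hard-coded expected value can itself be patched
// out by an attacker, so this only raises the bar.
public class IntegrityCheck {
    static String sha256Hex(byte[] data) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // SHA-256 of the empty input, for illustration only.
        String expected =
            "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
        String actual = sha256Hex(new byte[0]);
        System.out.println(actual.equals(expected) ? "asset intact" : "asset tampered");
    }
}
```
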

Even with smali, people can play with your code. All in all, it's just not possible.

matthias krull
Mohammed Azharuddin Shaikh
  • RE - "This isn't possible": Sounds like APK system needs some way of encryption somehow. Matlab has a similar problem of needing to protect IP in deployed applications. [Matlab's solution was to use some encryption](http://www.mathworks.com/products/compiler/description1.html). Do you think maybe Android APK needs something similar? – Trevor Boyd Smith Dec 13 '12 at 14:12
  • @TrevorBoydSmith: Encryption doesn't help much when the OS is open source and rootable. The system needs a key in order to decrypt the APK and run stuff. And if the system has a key, and i have unfettered access to the system, then i know where to find the key and can get to it. Meaning *i have the key now too*. – cHao Dec 13 '12 at 22:09
  • 1
    @cHao. I purposely did not specify how to do the encryption because encryption is outside of my domain of knowledge. – Trevor Boyd Smith Dec 14 '12 at 19:14
  • 4
    @TrevorBoydSmith: It's the "how to do" part, though, that kills the whole idea. There's simply no way to execute encrypted code directly; at some point, the decrypted code has to be available. Which means (1) there must be a key (that i, as root, probably have access to), and (2) I might even be able to find the clear copy in RAM and just not worry about encryption anyway. – cHao Dec 14 '12 at 19:59
  • 1
    @cHao I think you are misunderstanding my idea a little. ¶ *Everything* in security is balanced against cost. The purpose of adding a new security feature to the APK system, just like the purpose of physical security, passwords, CAPTCHA, *encryption*, and the purpose of virtually every other security measure is to increase the cost of circumvention, not to make circumvention impossible. Also see [here](http://security.stackexchange.com/questions/26656/if-we-know-captcha-can-be-beat-why-are-we-still-using-them/26667#26667) or [here](http://security.stackexchange.com/a/26659/6499). – Trevor Boyd Smith Jan 08 '13 at 14:16
  • 3
    @TrevorBoydSmith: Problem is, in this case you simply can't raise the cost enough to make it not worthwhile. We're not talking about brute-forcing keys here; we're talking about *already having* them -- the OS has to have keys, and we have the OS. The only way to fix that would be to make the OS unrootable. Good luck with that; even Apple can't manage it. :) – cHao Jan 09 '13 at 14:23
  • @cHao The way you said "you simply *can't* raise the cost enough" seems to imply you think raising the cost is impossible. I admit raising the cost is more difficult but definitely not impossible ever. – Trevor Boyd Smith Jan 09 '13 at 19:38
  • 3
    @TrevorBoydSmith: I don't think raising the cost *in general* is impossible. I think (and say), *in particular*, that your suggestion is impossible -- because *it is*. MATLAB is not Android, and has certain freedoms that Android doesn't. In particular, it has obfuscation on its side; it's a lot harder to hide an encryption key. Android can't do that. Anyone who has the source code would know where the keys are hiding, and would have a tool to retrieve them within 10 minutes of the feature being announced. It's not just possible to do that; it's downright *trivial*. – cHao Jan 09 '13 at 21:39
  • @cHao you keep insisting on some form of encryption involving a static key that never changes and is in the same place. That sounds like a terrible idea and I am not sure why you keep on insisting on it. ¶ This entire conversation is not constructive so I am not going to respond anymore. – Trevor Boyd Smith Jan 10 '13 at 00:29
  • 6
    @TrevorBoydSmith: I've insisted nothing of the sort. What i'm insisting is that static, changing, moving, etc **do not matter**. In an open source OS, encryption alone can not protect the code from anyone who could reverse engineer it. Because i can read the code that would do the decryption, regardless of how the key is acquired, used, and/or stored, i can see how you did it and replicate it -- even more easily than i could reverse some super-secret app code. – cHao Jan 11 '13 at 15:22
  • 3
    @TrevorBoydSmith Since you clearly aren't getting this, let's make it simple: The CPU is in my complete control. The CPU must be able to see decrypted code to run it. Therefore, I can see decrypted code. Q.E.D. – Jonathon Reinhart Jul 14 '16 at 18:21
  • Changing extension to be able to unzip file... really? I guess renaming files magically alter the file's structure or something – Ben May 30 '18 at 16:40
43

100% avoidance of reverse engineering of an Android APK is not possible, but you can use these ways to make it harder to extract data, such as source code, assets, and resources, from your APK:

  1. Use ProGuard to obfuscate application code

  2. Use NDK using C and C++ to put your application core and secure part of code in .so files

  3. To secure resources, don't ship all the important resources in the assets folder of the APK. Download those resources at the application's first start-up.
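Point 3 could look roughly like the following sketch (the URL, destination path, and expected digest are placeholders, not a real endpoint): download the resource on first start-up and verify it against a known digest before trusting it:

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

import java.security.MessageDigest;

// Sketch only: download a resource on first start-up and check its digest
// before caching it. URL, path and digest values are placeholders.
public class FirstRunAssets {
    // Returns true if the content's SHA-256 digest matches expectedHex.
    public static boolean verify(byte[] content, String expectedHex) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(content)) sb.append(String.format("%02x", b));
        return sb.toString().equals(expectedHex);
    }

    // Download the asset only if it is not already cached locally.
    public static void fetchIfMissing(String url, Path dest, String expectedHex) throws Exception {
        if (Files.exists(dest)) return; // already downloaded on a previous run
        try (InputStream in = new URL(url).openStream()) {
            byte[] content = in.readAllBytes();
            if (!verify(content, expectedHex)) {
                throw new SecurityException("Downloaded asset failed integrity check");
            }
            Files.write(dest, content);
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder values; in a real app the digest would come from your server.
        System.out.println(verify("hello".getBytes(), "not-the-right-digest")); // false
    }
}
```

As the comments below note, this shifts the attack to the network, so the download must go over TLS and the digest must come from a trusted source.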

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
ρяσѕρєя K
  • 132,198
  • 53
  • 198
  • 213
  • 14
    Third one is really easing the attackers' work. Sniffing network communication is easier than reverse engineering. – totten Jan 26 '15 at 15:28
  • To solve the problem of the third one, one could encrypt the downloaded content and/or use an encrypted connection (e.g. SSL/TLS) –  Oct 16 '15 at 11:29
  • 2
    Encrypting the connection protects against people who sniff or modify traffic. In the case where the user himself is malicious (i.e. he has your apk and he is trying to hack it), he will still get the content by using your app, extracting resources as a root user; but yes, it does help against simple sniffing attacks. – Kevin Lee Mar 13 '16 at 18:01
  • Adding to that : 4)use dexguard for higher obfuscation but it's paid 5) use OBB file for assets download at time of downloading app ,it will help for reducing app size aswell – Ashok Kumar Jul 03 '19 at 02:39
30

Here are a few methods you can try:

  1. Use obfuscation and tools like ProGuard.
  2. Encrypt some part of the source code and data.
  3. Use a proprietary inbuilt checksum in the app to detect tampering.
  4. Introduce code to avoid loading in a debugger, that is, let the app have the ability to detect the debugger and exit / kill the debugger.
  5. Separate the authentication as an online service.
  6. Use application diversity.
  7. Use the fingerprinting technique, for example, hardware signatures of the devices from different subsystems, before authenticating the device.
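
For point 4, Android code would typically call android.os.Debug.isDebuggerConnected(); as a plain-JVM stand-in (all names here are illustrative), one can check whether the process was launched with a JDWP debug agent:

```java
import java.lang.management.ManagementFactory;
import java.util.List;

// Sketch of point 4 (debugger detection). On Android you would typically use
// android.os.Debug.isDebuggerConnected(); this plain-JVM stand-in checks
// whether the process was started with a JDWP debug agent instead.
public class DebuggerCheck {
    // True if any JVM input argument requests the jdwp debug agent.
    public static boolean jdwpAgentPresent(List<String> jvmArgs) {
        for (String arg : jvmArgs) {
            if (arg.contains("-agentlib:jdwp") || arg.contains("-Xrunjdwp")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> jvmArgs = ManagementFactory.getRuntimeMXBean().getInputArguments();
        if (jdwpAgentPresent(jvmArgs)) {
            // Possible responses: wipe keys, notify the server, or exit.
            System.exit(1);
        }
    }
}
```

Like all client-side checks, this only raises the cost of an attack; the check itself can be removed by a patient attacker.
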
Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Shan
  • 5,054
  • 12
  • 44
  • 58
25

 1. How can I completely avoid reverse engineering of an Android APK? Is this possible?

Impossible

 2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?

Impossible

 3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?

More tough - possible, but in fact it will mostly be more tough for the average user who is just googling for hacking guides. If somebody really wants to hack your app, it will be hacked, sooner or later.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
janot
  • 13,578
  • 1
  • 27
  • 57
22

 1. How can I completely avoid reverse engineering of an Android APK? Is this possible?

That is impossible

 2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?

Developers can take steps such as using tools like ProGuard to obfuscate their code, but up until now, it has been quite difficult to completely prevent someone from decompiling an app.

It's a really great tool and can increase the difficulty of 'reversing' your code whilst shrinking your code's footprint.

Integrated ProGuard support: ProGuard is now packaged with the SDK Tools. Developers can now obfuscate their code as an integrated part of a release build.

 3. Is there a way to make hacking more tough or even impossible? What more can I do to protect the source code in my APK file?

While researching, I came across HoseDex2Jar. This tool can make decompiling harder, but it seems it is not possible to protect your code completely.

Here are some helpful links you can refer to.

Community
  • 1
  • 1
RobinHood
  • 10,897
  • 4
  • 48
  • 97
21

The main question here is whether the dex files can be decompiled, and the answer is they can be, "sort of". There are disassemblers like dedexer and smali.

ProGuard, properly configured, will obfuscate your code. DexGuard, which is a commercial extended version of ProGuard, may help a bit more. However, your code can still be converted into smali and developers with reverse-engineering experience will be able to figure out what you are doing from the smali.

Maybe choose a good license and enforce it by law in the best possible way.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Abhishek Sabbarwal
  • 3,758
  • 1
  • 26
  • 41
12

Your client should hire someone that knows what they are doing, who can make the right decisions and can mentor you.

The talk above about you having some ability to change the transaction-processing system on the back end is absurd - you shouldn't be allowed to make such architectural changes, so don't expect to be able to.

My rationale on this:

Since your domain is payment processing, it's safe to assume that PCI DSS and/or PA-DSS (and potentially state/federal law) will be significant to your business, and to be compliant you must show you are secure. Being insecure, then finding out (via testing) that you aren't secure, then fixing, retesting, and so on until security can be verified at a suitable level is the expensive, slow, high-risk way to succeed. Doing the right thing - thinking hard up front, committing experienced talent to the job, developing in a secure manner, then testing, fixing (less), and so on (less) until security can be verified at a suitable level - is the inexpensive, fast, low-risk way to succeed.

straya
  • 5,002
  • 1
  • 28
  • 35
10

If we want to make reverse engineering (almost) impossible, we can put the application on a highly tamper-resistant chip, which executes all sensitive stuff internally, and communicates with some protocol to make controlling GUI possible on the host. Even tamper-resistant chips are not 100% crack proof; they just set the bar a lot higher than software methods. Of course, this is inconvenient: the application requires some little USB wart which holds the chip to be inserted into the device.

The question doesn't reveal the motivation for wanting to protect this application so jealously.

If the aim is to improve the security of the payment method by concealing whatever security flaws the application may have (known or otherwise), it is completely wrongheaded. The security-sensitive bits should in fact be open-sourced, if that is feasible. You should make it as easy as possible for any security researcher who reviews your application to find those bits and scrutinize their operation, and to contact you. Payment applications should not contain any embedded certificates. That is to say, there should be no server application which trusts a device simply because it has a fixed certificate from the factory. A payment transaction should be made on the user's credentials alone, using a correctly designed end-to-end authentication protocol which precludes trusting the application, or the platform, or the network, etc.

If the aim is to prevent cloning, short of that tamper-proof chip, there isn't anything you can do to protect the program from being reverse-engineered and copied, so that someone incorporates a compatible payment method into their own application, giving rise to "unauthorized clients". There are ways to make it difficult to develop unauthorized clients.

One would be to create checksums based on snapshots of the program's complete state: all state variables, for everything. GUI, logic, whatever. A clone program will not have exactly the same internal state. Sure, it is a state machine which has similar externally visible state transitions (as can be observed by inputs and outputs), but hardly the same internal state. A server application can interrogate the program: what is your detailed state? (i.e. give me a checksum over all of your internal state variables). This can be compared against dummy client code which executes on the server in parallel, going through the genuine state transitions. A third-party clone will have to replicate all of the relevant state changes of the genuine program in order to give the correct responses, which will hamper its development.
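The state-checksum interrogation described above might be sketched like this (everything here is illustrative, not a real protocol): serialize the internal state variables in a deterministic order and hash them, so a server running the same reference client can compare answers:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch of the state-checksum idea: serialize the internal
// state variables in a deterministic order and hash them, so the server
// (running the same reference client in parallel) can compare the answer.
public class StateChecksum {
    public static String checksum(Map<String, String> state) throws Exception {
        // TreeMap gives a deterministic key order for serialization, so the
        // same state always produces the same checksum.
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : new TreeMap<>(state).entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append(';');
        }
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(sb.toString().getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(checksum(Map.of("screen", "pay", "amount", "10")));
    }
}
```

A real deployment would cover far more state and vary which variables the server asks about per request, so a clone cannot precompute the answers.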

Kaz
  • 55,781
  • 9
  • 100
  • 149
10

As someone who worked extensively on payment platforms, including one mobile payments application (MyCheck), I would say that you need to delegate this behaviour to the server. No user name or password for the payment processor (whichever it is) should be stored or hardcoded in the mobile application. That's the last thing you want, because the source can be understood even if you obfuscate the code.

Also, you shouldn't store credit cards or payment tokens on the application. Everything should be, again, delegated to a service you built. It will also allow you, later on, to be PCI-compliant more easily, and the credit card companies won't breathe down your neck (like they did for us).

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Itai Sagi
  • 5,537
  • 12
  • 49
  • 73
8

The other upvoted answers here are correct. I just want to provide another option.

For certain functionality that you deem important, you can host a WebView control in your app. The functionality would then be implemented on your web server. It will look like it's running in your application.

Sarel Botha
  • 12,419
  • 7
  • 54
  • 59
7

Agreed with @Muhammad Saqib here: https://stackoverflow.com/a/46183706/2496464

And @Mumair gives good starting steps: https://stackoverflow.com/a/35411378/474330

It is always safe to assume that everything you distribute to your user's device belongs to the user. Plain and simple. You may be able to use the latest tools and procedures to encrypt your intellectual property, but there is no way to prevent a determined person from 'studying' your system. And even if current technology makes it difficult for them to gain unwanted access, there might be some easy way tomorrow, or even just in the next hour!

Thus, here comes the equation:

When it comes to money, we always assume that the client is untrusted.

Even in as simple as an in-game economy. (Especially in games! There are more 'sophisticated' users there and loopholes spread in seconds!)

How do we stay safe?

Most, if not all, of our key processing systems (and the database, of course) are located on the server side. Between the client and server lie encrypted communications, validations, etc. That is the idea of a thin client.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Zennichimaro
  • 5,236
  • 6
  • 54
  • 78
5

I suggest you to look at Protect Software Applications from Attacks. It's a commercial service, but my friend's company used this and they are glad to use it.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Talha
  • 12,673
  • 5
  • 49
  • 68
4

APK signature scheme v2 in Android 7.0 (Nougat)

The PackageManager class now supports verifying apps using the APK signature scheme v2. The APK signature scheme v2 is a whole-file signature scheme that significantly improves verification speed and strengthens integrity guarantees by detecting any unauthorized changes to APK files.

To maintain backward-compatibility, an APK must be signed with the v1 signature scheme (JAR signature scheme) before being signed with the v2 signature scheme. With the v2 signature scheme, verification fails if you sign the APK with an additional certificate after signing with the v2 scheme.

APK signature scheme v2 support will be available later in the N Developer Preview.

http://developer.android.com/preview/api-overview.html#apk_signature_v2

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
thiagolr
  • 6,909
  • 6
  • 44
  • 64
  • 2
    Apk signature v2 only prevents resources from being tampered with, but doesn't make reverse engineering any harder… – Louis CAD Oct 20 '17 at 17:51
  • 1
    Furthermore you can just remove the signature and re-sign it. The v2 signature is just a block of data in the APK file. – Robert Feb 28 '18 at 16:10
4

There is no way to completely avoid reverse engineering of an APK file. To protect application assets and resources, you can use encryption.

  • Encryption will make it harder to use them without decryption. Choosing a strong encryption algorithm will make cracking harder.
  • Add some spoof code into your main logic to make it harder to crack.
  • If you can write your critical logic in a native language, that will surely make it harder to decompile.
  • Use a third-party security framework, like Quixxi.
Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
immutable
  • 2,174
  • 3
  • 20
  • 27
3

Basically it's not possible. It will never be possible. However, there is hope. You can use an obfuscator to make it so that some common attacks are a lot harder to carry out, including things like:

  1. Renaming methods/classes (so in the decompiler you get types like a.a)
  2. Obfuscating control flow (so in the decompiler the code is very hard to read)
  3. Encrypting strings and possibly resources

I'm sure there are others, but those are the main ones. I work for a company called PreEmptive Solutions on a .NET obfuscator. They also have a Java obfuscator that works for Android, called DashO.

Obfuscation always comes with a price, though. Notably, performance is usually worse, and it requires some extra time around releases usually. However, if your intellectual property is extremely important to you, then it's usually worth it.

Otherwise, your only choice is to make it so that your Android application just passes through to a server that hosts all of the real logic of your application. This has its own share of problems, because it means users must be connected to the Internet to use your app.

Also, it's not just Android that has this problem. It's a problem on every app store. It's just a matter of how difficult it is to get to the package file (for example, I don't believe it's very easy on iPhones, but it's still possible).

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Earlz
  • 62,085
  • 98
  • 303
  • 499
  • Unfortunately if one hack the client(the app) they would be able to see communication format and create their own server :( – Jocky Doe Jan 10 '18 at 00:33
3

It's not possible to completely avoid reverse engineering, but by making the internals more complex you can make it more difficult for attackers to see the clear operation of the app, which may reduce the number of attack vectors.

If the application handles highly sensitive data, various techniques exist which can increase the complexity of reverse engineering your code. One technique is to use C/C++ to limit easy runtime manipulation by the attacker. There are ample C and C++ libraries that are very mature and easy to integrate with and Android offers JNI.

An attacker must first circumvent the debugging restrictions in order to attack the application at a low level. This adds further complexity to an attack. Android applications should have android:debuggable="false" set in the application manifest to prevent easy run-time manipulation by an attacker or malware.

Trace checking – An application can determine whether or not it is currently being traced by a debugger or other debugging tool. If it is being traced, the application can perform any number of possible attack-response actions, such as discarding encryption keys to protect user data, notifying a server administrator, or other such responses in an attempt to defend itself. This can be determined by checking the process status flags, or by using other techniques like comparing the return value of ptrace attach, checking the parent process, blacklisting debuggers in the process list, or comparing timestamps at different places in the program.
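A minimal sketch of the TracerPid variant of trace checking (Linux-specific, which includes Android; the helper names are made up for this example):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the trace check described above. It reads /proc/self/status,
// which the Linux kernel (and therefore Android) exposes. Names are
// illustrative, not from any particular API.
public class TraceCheck {
    // Parse the TracerPid field out of the text of /proc/<pid>/status.
    // Returns -1 if the field is missing.
    public static int parseTracerPid(String statusText) {
        for (String line : statusText.split("\n")) {
            if (line.startsWith("TracerPid:")) {
                return Integer.parseInt(line.substring("TracerPid:".length()).trim());
            }
        }
        return -1;
    }

    public static void main(String[] args) throws Exception {
        Path status = Paths.get("/proc/self/status");
        if (Files.exists(status)) {
            String text = new String(Files.readAllBytes(status));
            // A non-zero TracerPid means some process (e.g. a debugger) is tracing us.
            if (parseTracerPid(text) > 0) {
                System.exit(1); // or discard keys, notify a server, etc.
            }
        }
    }
}
```

An attacker with the APK in hand can of course patch this check out, so it is one layer among several, not a guarantee.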

Optimizations - To hide advanced mathematical computations and other types of complex logic, utilizing compiler optimizations can help obfuscate the object code so that it cannot easily be disassembled by an attacker, making it more difficult for an attacker to gain an understanding of the particular code. In Android this can more easily be achieved by utilizing natively compiled libraries with the NDK. In addition, using an LLVM Obfuscator or any protector SDK will provide better machine code obfuscation.

Stripping binaries – Stripping native binaries is an effective way to increase the amount of time and skill level required of an attacker to view the makeup of your application's low-level functions. By stripping a binary, its symbol table is removed, so that an attacker cannot easily debug or reverse engineer the application. You can refer to techniques used on GNU/Linux systems, like sstrip, or use UPX.

And finally, you should be aware of obfuscation and tools like ProGuard.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Sanket Prabhu
  • 2,232
  • 3
  • 20
  • 33
  • Where did you plagiarise this from? From blog post *[Increase Code Complexity and Use Obfuscation](https://books.nowsecure.com/secure-mobile-development/en/coding-practices/code-complexity-and-obfuscation.html)*? – Peter Mortensen Jul 22 '21 at 21:02
3

If your app is this sensitive, then you should handle the payment processing part on the server side. Try to change your payment processing algorithms. Use the Android app only for collecting and displaying user information (e.g., account balance), and rather than processing payments within Java code, send this task to your server over a secure SSL connection with encrypted parameters. Create a fully encrypted and secure API to communicate with your server.
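One hedged sketch of "encrypted parameters" for such an API (the session-key handling here is an assumption, not a prescription): sign each request with an HMAC on top of the SSL transport, so the server can reject tampered calls:

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: authenticate API parameters with HMAC-SHA256 on top of TLS.
// The key would be per-session and server-issued, never hardcoded in the APK.
public class RequestSigner {
    public static String hmacSha256Hex(byte[] key, byte[] message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        StringBuilder hex = new StringBuilder();
        for (byte b : mac.doFinal(message)) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // The client sends the params plus this signature; the server recomputes
    // it with the shared session key and rejects mismatches.
    public static String sign(String params, byte[] sessionKey) throws Exception {
        return hmacSha256Hex(sessionKey, params.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        // Placeholder session key and parameters for illustration only.
        System.out.println(sign("amount=10&currency=USD",
                "example-session-key".getBytes(StandardCharsets.UTF_8)));
    }
}
```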

Of course, it can also be hacked too and it has nothing to do with source code protection, but consider it another security layer to make it harder for hackers to trick your app.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Muhammad Saqib
  • 2,185
  • 3
  • 35
  • 48
3

100% security of the source code and resources is not possible in Android. But you can make it a bit more difficult for the reverse engineer. You can find more details on this in the links below:

Visit Saving constant values securely and Mobile App Security Best Practices for App Developers.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Rajiv Ranjan
  • 333
  • 4
  • 13
2

Aren't Trusted Platform Module (TPM) chips supposed to manage protected code for you?

They are becoming common on PCs (especially Apple ones) and they may already exist in today's smartphone chips. Unfortunately, there isn't any OS API to make use of it yet. Hopefully, Android will add support for this one day. That's also the key to clean content DRM (which Google is working on for WebM).

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
robUx4
  • 853
  • 9
  • 13
2

Nothing is secure once you put it in end users' hands, but some common practices may make it harder for an attacker to steal data.

  • Put your main logic (algorithms) on the server side.
  • Make sure communication between server and client is secured via SSL or HTTPS, or use key-pair generation algorithms (ECC, RSA). Ensure that sensitive information remains end-to-end encrypted.
  • Use sessions and expire them after a specific time interval.
  • Encrypt resources and fetch them from the server on demand.
  • Or make a hybrid app which accesses the system via a WebView, protecting resources + code on the server.

These are multiple approaches; obviously you have to trade off between performance and security.
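The session-expiry bullet above might be sketched as follows (a toy policy class, with timestamps passed in explicitly so the logic stays testable; the TTL value is an arbitrary example):

```java
// Sketch of the "expire sessions after a specific interval" point.
// Timestamps are passed in explicitly so the logic is deterministic.
public class SessionPolicy {
    private final long ttlMillis;

    public SessionPolicy(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // A session issued at issuedAtMillis is valid only within its TTL window,
    // and never before it was issued (a basic clock-sanity check).
    public boolean isValid(long issuedAtMillis, long nowMillis) {
        return nowMillis >= issuedAtMillis && nowMillis - issuedAtMillis < ttlMillis;
    }

    public static void main(String[] args) {
        SessionPolicy policy = new SessionPolicy(15 * 60 * 1000L); // 15-minute TTL
        System.out.println(policy.isValid(0L, 5 * 60 * 1000L)); // true
    }
}
```

In practice the server, not the client, enforces the policy, since client-side clocks and code are under the attacker's control.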

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
mumair
  • 2,768
  • 30
  • 39
2

Tool: Using ProGuard in your application can restrict reverse engineering of your application.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Mayank Nema
  • 223
  • 2
  • 7
2

I can see that there are good answers to this question. In addition, you can use Facebook ReDex to optimize the code. ReDex works at the .dex level, whereas ProGuard works at the .class level.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Asthme
  • 5,163
  • 6
  • 47
  • 65
2

I know some banking apps are using DexGuard which provides obfuscation as well as encryption of classes, strings, assets, resource files and native libraries.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Hunter
  • 3,080
  • 20
  • 23
1

How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?

An APK file's integrity is protected with the SHA-1 algorithm. You can see some files in the META-INF folder of the APK. If you extract an APK file, change any of its content, zip it again, and run that new APK file on an Android machine, it will not work, because the hashes will no longer match.
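For illustration, the per-file digests recorded in META-INF/MANIFEST.MF are Base64-encoded hashes of each file's bytes (SHA-1 in older APKs, SHA-256 in newer ones); a sketch of computing one such digest:

```java
import java.security.MessageDigest;
import java.util.Base64;

// Illustrative: the per-file digests in META-INF/MANIFEST.MF are
// Base64-encoded hashes of each file's bytes (SHA-1 here, as in older APKs).
public class ManifestDigest {
    public static String sha1Base64(byte[] fileBytes) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        return Base64.getEncoder().encodeToString(md.digest(fileBytes));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sha1Base64("abc".getBytes()));
    }
}
```

As the comments below point out, an attacker can simply regenerate these digests and re-sign the APK with their own key, so this protects integrity, not secrecy.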

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
Jeegar Patel
  • 26,264
  • 51
  • 149
  • 222
  • This is true; but it's trivial to resign the APK (with a different certificate) and everything will work again. It's possible to check which signature has been used to sign the APK from within the application itself, and error out if the certificate changes, but it's only slightly less trivial to edit this code out of the application. – David Given Apr 24 '13 at 10:52
  • This may prevent the android device from running modified code, but you can still easily extract the relevant code and write new code on a PC which does what you want. – Sarel Botha Feb 13 '14 at 18:01
0

When they have the app on their phone, they have full access to its memory. So if you want to prevent it from being hacked, you could try to make it so that they can't simply find static memory addresses by using a debugger. They could do a stack buffer overflow if they have somewhere to write past a limit. So when accepting input, enforce the limit: if the input is longer than the limit, ignore it, so they can't put assembly code there.

user3742860
  • 100
  • 2
  • 13
-1

Just an addition to already good answers above.

Another trick I know is to store the valuable code as a Java library, then set that library as a dependency of your Android project. A C .so file would be better, but an Android library will do.

This way, the valuable code stored in the Android library won't be visible after decompiling.

stuckedunderflow
  • 3,551
  • 8
  • 46
  • 63
-3

Basically, there are five methods to protect your APK file:

  1. Isolate Java Program,
  2. Encrypt Class Files,
  3. Convert to Native Codes,
  4. Code Obfuscation
  5. Online Encryption

I suggest you use online encryption, because it is safe and convenient, and you needn't spend much time to achieve it.

For example, APK Protect. It is an online encryption website for APKs. It provides Java code and C++ code protection to achieve anti-debugging and anti-decompilation effects. The process is simple and easy.

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
  • 9
    He's asking if it's possible. You're saying yes, it's easy. You're wrong. It's impossible. –  Jan 12 '14 at 22:54
-5

Nope, can't be done!

Your three questions circle around 100% protecting an app from being read. By principle alone, it just can't be done. And the more you invest in trying to do it, the worse the experience will be for you and, eventually, for whichever machine is trying to just read your app. Think of how much slower HTTPS intrinsically is than HTTP, because of the security layer and the maths that must be processed. The more layers, the slower it will be for someone to unpack it, but never impossible, given that you actually want it to be read - that's why it's made into a package and delivered.

A simple analogy is handing a concealed object to someone. If that person can see what's inside, they can take a picture and make something exactly like it. More so in the case of code: someone dedicated enough can create an exact replica of that object, even if using a completely different process.

Fake security sense

As a payment-processing app, you shouldn't rely on whatever security you think you can create in your binary code for the integrity of the whole system. Assume anything that comes from the client can quickly become unreliable. Keep the app simple, smooth and fast. And, instead, worry about your server. Make a strict communication protocol to easily monitor the server, for instance. That's the only thing we can rely on.

Now, stick with me with this other idea on how to improve the server-side...

Mouth on the money

I am developing a payment processing app

Google has been very successful in deterring malicious hackers in general by using a simple financial method to "protect" Google Chrome, and I quote:

We have a standing $50,000 reward for participants that can compromise a Chromebook or Chromebox with device persistence in guest mode

Our best bet to actually get closer to 100% "security" is picking the right fight for our money's worth. Perhaps most people won't be able to offer a 50k reward, but even a 1k reward can go a long way, and it's also much cheaper than investing that money into engineering any kind of bug catcher.

And investing in artificial intelligence to identify patterns in money flow to predict potential risks and find small leakages can also be much cheaper than trying to prevent both through whatever engineering.

Obvious exceptions

Granted, that won't protect us from "lunatics" and "lucky pranksters"... but nothing will. Meanwhile, when the system is properly set up, the latter group will only enjoy a little time inside while the system readjusts. As for a lunatic, we'd only need to worry if one goes big enough to become a nemesis. And that would make a great story, anyway! :)

Too long; didn't read;

In other words, perhaps a better question to ask to yourself, instead of "how to avoid reverse engineering in my app" could be "how to engineer a more secure payment processing system" and focus on what you're actually trying to achieve: a secure system.

Long ago, I've tried writing more about all the above, to answer questions such as why I'm putting "security" and "protect" in quotes (what do they even really mean?).

Peter Mortensen
  • 30,738
  • 21
  • 105
  • 131
cregox
  • 17,674
  • 15
  • 85
  • 116
  • "_In other words, instead of asking "how to avoid reverse engineering" try asking "how to engineer a bullet proof payment processing app"._" - I don't think that's good advice, especially if you are suggesting to ask on StackOverflow. – Sebi Nov 16 '19 at 20:21
  • @Sebi perhaps "system" there would fit better than "app". but, yeah, i can see now how confusing that sentence can sound in any case. what i still can't see is why people downvoted this answer so much, without any comment. – cregox Nov 16 '19 at 20:28
  • 2
    Just for the record, I didn't downvote this answer. I think people may have downvoted because the answer advises to spend money to get the answers - in my opinion it may be good complementary help if you are willing to invest. I definitely agree with your last paragraph, and I think security.stackexchange.com may be a good place to ask about overall setup. – Sebi Nov 17 '19 at 16:02
  • ah, ok... now i get it, lol. yeah, i wasn't even considering that interpretation! i'll try and edit it out to see what happens... thanks man. :D – cregox Nov 21 '19 at 14:21
  • it's now -3, I think it was -5 before. don't remember or really care... my attention was drawn here, this time, due to some approved editing into common case rather than my current favourite lower case style #locaws (more about it at cregox.net), which I just rejected to keep my identity on the post (and led me to an annoying spin off quest https://chat.stackoverflow.com/transcript/message/48255477#48255477). like in that case, i'm commenting here for no good reason other than reporting. cheers! – cregox Jan 06 '20 at 08:39
  • lol @peter for "active reading". i'm out. – cregox Jul 25 '21 at 07:10