
Is it possible for other apps to read my app's signature toString() format? Because I'm using my app's signature as the first core security of my app. Is it possible to crack? What else can I use as a password for my app that can't be seen even when decompiled, so that my app becomes void when that value changes or my app gets decompiled?

  • Yes, signature information is meant to be readable so that the signing author can be verified. If you open an APK file as a Zip file, you can find a `CERT.RSA` file in the `META-INF` folder. Running the `keytool` command on this file can give you the signing information. Someone cannot re-sign it with your signing key unless they have your keystore and credentials. – rupinderjeet Oct 03 '21 at 09:26
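To make that concrete, here is a minimal Kotlin sketch of the approach the comment describes (the file path and entry names are placeholders of mine; APKs signed only with the newer v2/v3 schemes may not carry such an entry at all):

```kotlin
import java.security.cert.CertificateFactory
import java.security.cert.X509Certificate
import java.util.zip.ZipFile

// Minimal sketch of what the comment describes: open the APK as a Zip archive,
// locate the signature block in META-INF and parse the certificate(s) out of it.
// "app.apk" is a placeholder path; the entry name varies (CERT.RSA, <alias>.RSA,
// .DSA or .EC), and APKs signed only with the v2/v3 scheme may not contain one.
fun printApkSigningCertificates(apkPath: String = "app.apk") {
    val factory = CertificateFactory.getInstance("X.509")
    ZipFile(apkPath).use { zip ->
        for (entry in zip.entries()) {
            val name = entry.name
            if (!name.startsWith("META-INF/")) continue
            if (!(name.endsWith(".RSA") || name.endsWith(".DSA") || name.endsWith(".EC"))) continue
            zip.getInputStream(entry).use { stream ->
                // The PKCS#7 signature block embeds the X.509 signing certificate chain.
                factory.generateCertificates(stream)
                    .filterIsInstance<X509Certificate>()
                    .forEach { cert -> println("$name -> ${cert.subjectX500Principal}") }
            }
        }
    }
}
```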

1 Answer


Your Mobile App APK Is Public: You Don't Control It Anymore

Is it possible for other apps to read my app's signature toString() format?

From the moment you ship your mobile app to the public you lose control of it, and nothing in it is a secret any more, because a lot of open source tools exist to help reverse engineer it easily, for example MobSF - Mobile Security Framework:

Mobile Security Framework is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing framework capable of performing static analysis, dynamic analysis, malware analysis and web API testing.

Instrumentation frameworks are also often used by attackers to perform dynamic analysis at runtime, for example Frida:

Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.

If your mobile app relies only on its own signature to prevent modifications at runtime, then it will fail to be effective against attackers, though it may prevent abuse from script kiddies. All an attacker needs to do is statically reverse engineer your mobile app, find and remove the protections you put in place, repackage it and sign it with their own key.
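To make it concrete, an in-app signature check of the kind the question relies on usually looks something like the sketch below (the expected fingerprint constant is a placeholder of mine, and it assumes API 28+). Because it runs entirely inside the APK, an attacker who decompiles the app can simply delete the check, or hook it to always return true, before repackaging:

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import java.security.MessageDigest

// Placeholder: SHA-256 fingerprint of the developer's release signing certificate.
private const val EXPECTED_CERT_SHA256 = "AA:BB:CC:..."

// Minimal sketch of an in-app signature self-check (requires API 28+ for
// GET_SIGNING_CERTIFICATES). Because this runs client-side, removing or hooking
// it at runtime defeats it entirely.
fun isSignedWithExpectedCert(context: Context): Boolean {
    val info = context.packageManager.getPackageInfo(
        context.packageName, PackageManager.GET_SIGNING_CERTIFICATES
    )
    return info.signingInfo.apkContentsSigners.any { signature ->
        // Hash the DER-encoded certificate and compare with the hardcoded fingerprint.
        val digest = MessageDigest.getInstance("SHA-256").digest(signature.toByteArray())
        digest.joinToString(":") { "%02X".format(it) } == EXPECTED_CERT_SHA256
    }
}
```

Note that the same PackageManager API also lets any other installed app read your app's signing certificates, which is why the signature itself is not a secret.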

Repackaging mobile apps and offering them in the official Google Play store or in alternative stores is more common than many developers may think, and a well known case that made the news was Pokémon Go:

Remember Pokémon Go, the location-based augmented reality mobile game from Niantic Labs that became an overnight global sensation when it launched in 2016? Well, the game has had a record 2019 having surpassed its launch year in revenues, announced a live AR multiplayer feature, and, on a slightly dissonant note, sued an “association of hackers” for creating and distributing unauthorized derivative versions of the company’s mobile apps.

The staggered release of Pokémon Go meant that obsessive fans in as-yet unserved regions gravitated to repackaged apps. The relatively innocuous of these apps merely contained modifications to by-pass in-app controls. However, the sheer demand for these apps also meant that unscrupulous actors were now able to get players to download versions that had either been injected with Trojans and adware or, worse still, completely repackaged malicious apps with no Pokémon code whatsoever. These repackaged apps, however, do not automatically spawn any Pokémon in regions where the official app is yet to be launched. To get around this, players had to hack API communications to spoof locations. At scale, the proliferation of repackaged apps and API abuse opens up a new vector for sophisticated DDoS attacks.

As you can see, in-app controls can be bypassed, and repackaged apps are offered and end up being installed and used by people who either don't know, or don't mind, that it is not the original app, as long as it bypasses some controls in a way that benefits them.

So, am I saying that you shouldn't use in-app controls? In my opinion you should use as many layers of defence as you can afford, in order to make the life of attackers as hard as possible, but bear in mind that if you don't combine these in-app defences with a mechanism that lets the API backend for your mobile app know when a request comes from a genuine and untampered version of your mobile app, then your efforts to thwart the most determined attackers will end up failing.

Reverse Engineering Techniques

Because I'm using my app's signature as the first core security of my app. Is it possible to crack?

Yes, it can be done using static analysis via decompilation of the mobile app and/or with dynamic analysis at runtime.

An attacker may first decompile your mobile app in order to inspect the code and understand how it works, and if necessary also use an instrumentation framework to see at runtime how it behaves and how it could be bypassed in real-time.

You can learn how to use MobSF for static reverse engineering in this article I wrote:

The range of open source tools available for reverse engineering is huge, and we really can't scratch the surface of this topic in this article, but instead we will focus on using the Mobile Security Framework (MobSF) to demonstrate how to reverse engineer the APK of our mobile app. MobSF is a collection of open source tools that present their results in an attractive dashboard, but the same tools used under the hood within MobSF and elsewhere can be used individually to achieve the same results.

During this article we will use the Android Hide Secrets research repository that is a dummy mobile app with API keys hidden using several different techniques.

For an example of how to use an instrumentation framework at runtime, you can see the article I wrote, How to Bypass Certificate Pinning with Frida on an Android App:

Today I will show how to use the Frida instrumentation framework to hook into the mobile app at runtime and instrument the code in order to perform a successful MitM attack even when the mobile app has implemented certificate pinning.

Bypassing certificate pinning is not too hard, just a little laborious, and allows an attacker to understand in detail how a mobile app communicates with its API, and then use that same knowledge to automate attacks or build other services around it.
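For readers unfamiliar with what is being bypassed there: a typical pinning setup on Android might look like the OkHttp sketch below (the hostname and pin value are placeholders of mine). A Frida script simply hooks this check out of the way at runtime:

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Minimal sketch of certificate pinning with OkHttp.
// "api.example.com" and the pin value are placeholders for your own API host
// and the base64 SHA-256 hash of your server certificate's public key.
val pinnedClient: OkHttpClient = OkHttpClient.Builder()
    .certificatePinner(
        CertificatePinner.Builder()
            .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
            .build()
    )
    .build()
```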

Possible Solutions

What else can I use as a password for my app that can't be seen even when decompiled, so that my app becomes void when that value changes or my app gets decompiled?

You can make it more difficult, but you cannot make it impossible for your app to be decompiled and reverse engineered. For example, you can use ProGuard or the R8 compiler built into Android Studio:

When enabling shrinking, you also benefit from obfuscation, which shortens the names of your app’s classes and members, and optimization, which applies more aggressive strategies to further reduce the size of your app. Obfuscation: shortens the name of classes and members, which results in reduced DEX file sizes. To learn more, go to the section about how to obfuscate your code.
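Enabling it is a small change in the module-level Gradle file. A minimal sketch in the Gradle Kotlin DSL, assuming the default Android Studio project layout:

```kotlin
// Module-level build.gradle.kts: turn on R8 shrinking and obfuscation for release builds.
android {
    buildTypes {
        getByName("release") {
            isMinifyEnabled = true   // enables R8 code shrinking, obfuscation and optimization
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"  // your project-specific keep rules
            )
        }
    }
}
```

Bear in mind this only raises the effort required; decompilers will still show the renamed code.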

A possibly better solution is to shift the decision to the API backend of your mobile app, provided it works with one. The API backend will be the one deciding whether or not to provide the data the app needs to work, but for that the backend needs to know if the requests come from what it expects: a genuine and untampered version of your mobile app.

So, if your mobile app works with an API backend, then I recommend you read this answer I gave to the question How to secure an API REST for mobile app?, especially the sections Hardening and Shielding the Mobile App, Securing the API Server and A Possible Better Solution. There you will learn that a Mobile App Attestation solution allows the API backend to know when a request is indeed from your genuine and untampered mobile app, and thus to deny requests coming from tampered or repackaged versions of it, rendering them useless due to the lack of data to work with.
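As a rough client-side illustration of that idea, and not the API of any specific product: the sketch below attaches an attestation token to every request with an OkHttp interceptor, so the backend can refuse requests that lack a valid one. The header name and the fetchAttestationToken() provider are hypothetical placeholders for whatever attestation solution you integrate:

```kotlin
import okhttp3.Interceptor
import okhttp3.OkHttpClient
import okhttp3.Response

// Sketch only: attaches an attestation token to every API request so the backend
// can verify it was issued for a genuine, untampered build of the app.
// The header name and fetchAttestationToken() are hypothetical placeholders.
class AttestationInterceptor(private val fetchAttestationToken: () -> String) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val attestedRequest = chain.request().newBuilder()
            .header("X-App-Attestation", fetchAttestationToken())
            .build()
        return chain.proceed(attestedRequest)
    }
}

// Usage: the backend rejects requests whose token is missing or fails verification.
val apiClient: OkHttpClient = OkHttpClient.Builder()
    .addInterceptor(AttestationInterceptor { /* obtain token from your attestation SDK */ "token" })
    .build()
```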

Do You Want To Go The Extra Mile?

In any response to a security question I always like to reference the excellent work from the OWASP foundation.

For Mobile Apps

OWASP Mobile Security Project - Top 10 risks

The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.

OWASP - Mobile Security Testing Guide:

The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.

For APIs

OWASP API Security Top 10

The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.

Exadra37
  • Welcome to the community :). We are here to share knowledge and to help each other. When you find an answer informative and helpful you can always upvote it. If you find it to be the correct answer to your question then you should mark it as the accepted answer. – Exadra37 Oct 04 '21 at 12:00
  • Sorry, I can't upvote because I'm new here and it says I need 15 reputation to do it – LIGHTNING RED DRAGON GAMING Oct 04 '21 at 12:12
  • No problem. You will eventually get there :) – Exadra37 Oct 04 '21 at 12:31