Does Security by Obscurity Work?

No, it doesn't. That's what common security wisdom says, and I belonged to that school of thought for most of my career as a security professional. That said, in the 2000s I did my fair share of malware analysis, and even as a strong believer in the above claim, I couldn't deny that some of the defenses malware authors had come up with were at least mildly annoying. To be honest, analyzing those samples was probably some of the most challenging work of my career.

Over the last couple of years, my team and I started re-encountering anti-RE in our daily work - only this time it wasn't malware or DRM (not that there's a big difference between the two). Instead, it was now regular apps that refused to be reverse engineered. In the mobile world, security by obscurity was becoming a thing.

This rise of obfuscation and anti-tampering in run-of-the-mill mobile apps is causing some confusion among security testers - that much became clear to me when I started working on the OWASP Mobile Security Testing Guide earlier this year. Terminology such as "vulnerability to reverse engineering attacks" was floating around, and it wasn't clear how, if at all, mobile apps were expected to impede reverse engineering. Is it a vulnerability not to prevent reverse engineering? Should apps that store sensitive data refuse to run on rooted devices? Should symbols always be stripped? How much obfuscation is enough?

For security testers, the rise of software protections in everyday mobile apps causes issues as well. When security by obscurity is an integral part of an app's architecture, assessing the security of that app becomes more difficult. Furthermore, black-box tests of heavily protected apps can be demanding: how do you test something that won't let you look inside and tamper with it?

Perhaps the question shouldn't be whether security by obscurity works - a simple yes or no. We already know that perfect protection is most likely impossible. The more useful question is how well a given set of reverse engineering defenses achieves its goal: raising the bar for static and dynamic analysis so that adversaries are deterred to a certain degree.

This is the context of Vantage Point's research program "Devising a process and method to categorize and score reverse engineering resiliency". In the past two years, we've spent significant research time mapping the state of the art in mobile reverse engineering defenses, and we're now starting to share some of that research with the world. Vincent Tan has shown how to break BYOD security solutions at BlackHat. I myself have been busy hacking various mobile token apps - a paper documenting my journey will be released on August 25th at HITB GSEC Singapore.

This upcoming publication offers a technical look at the attacker's side: What processes and tools can an adversary use to clone mobile tokens despite the reverse engineering defenses in place? What kinds of defenses are commonly used, and how effective are they? I'll show that attacks are possible even on heavily protected apps, even though the required effort is sometimes ridiculously high. All of this is a technical precursor to our work on assessing reverse engineering resiliency, which is scheduled for release later this year.

In the course of my research, I'll demonstrate how to devise "cloning" tools for popular mobile OTP tokens by bypassing their reverse engineering defenses and then using static & dynamic analysis to understand and reproduce their respective OTP generators. Token instances can then be replicated by copying the correct data from the (rooted) device. Check out the demo videos:
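To make the "reproduce the OTP generator" step concrete: the proprietary algorithms used by commercial tokens differ from product to product, but the general shape of the problem is illustrated well by the standard HOTP/TOTP schemes (RFC 4226 / RFC 6238). The sketch below is a generic implementation of those RFCs, not the vendors' actual code - it shows why possession of the extracted seed material is, absent further protections, sufficient to generate valid codes.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte selects a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed on the current Unix time step."""
    return hotp(secret, int(time.time()) // timestep, digits)
```

Anyone holding `secret` - e.g. after copying it off a rooted device - computes the same codes as the legitimate token, which is precisely what the cloning tools exploit.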

In my view, the research doesn't show vulnerabilities specific to RSA's or VASCO Data Security's products - rather, it emphasizes that perfect protection of user secrets is impossible when the OS environment is compromised. Even so, higher grades of software protection result in higher reverse engineering effort - that much will be evident in my paper.

As for mitigating the demonstrated attacks, the best defense is securing the mobile token with a PIN. Because the token then depends on something the user knows, copying the data from the device is no longer sufficient to replicate it. Both VASCO DIGIPASS and RSA SecurID implement the PIN feature in a way that prevents the attack.
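The principle behind the PIN defense can be sketched as follows. This is a minimal illustration, not how either vendor actually implements it: the token seed is encrypted under a key stretched from the PIN, so a device image yields only the salt and ciphertext. (A real product would use authenticated encryption such as AES-GCM; the XOR keystream here just keeps the example dependency-free.)

```python
import hashlib
import os

def derive_key(pin: str, salt: bytes, length: int) -> bytes:
    # PBKDF2 stretches the low-entropy PIN; the iteration count slows brute force.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000, dklen=length)

def seal_seed(seed: bytes, pin: str):
    """Encrypt the token seed under a PIN-derived keystream (illustrative only)."""
    salt = os.urandom(16)
    key = derive_key(pin, salt, len(seed))
    return salt, bytes(s ^ k for s, k in zip(seed, key))

def open_seed(salt: bytes, blob: bytes, pin: str) -> bytes:
    """Recover the seed - only possible with the correct PIN."""
    key = derive_key(pin, salt, len(blob))
    return bytes(b ^ k for b, k in zip(blob, key))
```

An attacker who copies `salt` and `blob` from the device still lacks the PIN, so the cloned data alone does not reproduce the OTP generator.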

VASCO Data Security and EMC have received the demonstration tools in advance and I'm including their response here as well:

  • RSA notes that the attack does not pose a risk for customers who follow Best Practices for RSA SecurID Software Tokens. These best practices recommend not installing or running the SecurID Software Token app on rooted or jailbroken devices. It is also recommended to configure a PIN or password as an additional factor when using the SecurID token. In RSA's view, the research does not show a new vulnerability or flaw specific to the RSA authentication technology (I agree on that - see above).
  • VASCO Data Security points out that my analysis focuses on demo apps, which are not protected in the same way as production apps. Specifically, production apps have additional security features, and the OTP calculation happens in native code, making it harder to reverse engineer. VASCO Data Security also offered me the opportunity to analyze their production apps - which I didn't take up, as I was already in the last stages of preparing the paper. Furthermore, DIGIPASS offers a PIN feature that prevents the attack (as the user's data is encrypted with the PIN).

For more information and technical details, check out the paper.

About the Author

Bernhard Mueller is a full-stack hacker, security researcher, and winner of BlackHat's Pwnie Award.

Follow him on Twitter: @muellerberndt