This article first appeared in IEEE Security & Privacy magazine and is brought to you by InfoQ & IEEE Computer Society.
When I was growing up, phones were phones. You could call other phones and talk to people; they could call you. That was it. With the spread of smartphones, things have certainly changed. Today’s smartphones have more in common with computers than with the phones we had just a few years ago. In fact, smartphones are simply computers with extra hardware—namely, a GSM (Global System for Mobile Communications) radio and a baseband processor to control it. These extra features are great, but with the power they provide, there’s also a threat. Today, smartphones are becoming targets of attackers in the same way PCs have been for many years. Here, I focus on the security models of two smartphone operating systems: Apple’s iOS and Google’s Android. These two have a special place in my heart because I was the first to publicly exploit both of them.
Device security has many aspects. For brevity, I’ll put aside topics such as encryption, locking, and privacy and focus on what attackers really want: running their code on your device. Just as in the PC world, attackers can get remote code to run on a mobile device in two ways. The first is to get users to download, install, and run their software—that is, malware. The other is to attack the device by exploiting software vulnerabilities—that is, drive-by downloads. I’ll look at how iOS and Android try to prevent these two events.
Mobile Malware
iOS and Android both offer a public marketplace—respectively, the App Store and the Android Market—but take dramatically different approaches to limit malware on their devices.
iOS
In typical Apple fashion, the App Store is tightly controlled from the top down. Apple must approve an application before it can appear in the App Store, and it enforces this on the device through code signing: iPhones won’t run an application or load a library unless it’s signed with Apple’s private key. No one besides Apple knows exactly how closely it reviews iOS apps. As a professional code auditor, I know Apple can’t be reviewing them all that carefully given the sheer number of apps, but any kind of review will eliminate the most obvious malware. If a piece of malware did slip through the review and make it into the App Store, and people found out about it, Apple could remove it from the App Store and remotely remove it from devices on which it was installed.
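To illustrate the idea, here’s a minimal, self-contained Java sketch of signature-based code verification. It’s a conceptual stand-in, not Apple’s actual implementation: all class and variable names are invented, and a real scheme involves certificate chains and per-page hashes rather than a single raw key pair.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Conceptual sketch of code signing: the platform vendor signs an app
// binary with its private key; the device verifies the signature with the
// vendor's public key before allowing the code to run. This is NOT Apple's
// actual implementation -- all names here are invented for illustration.
public class CodeSigningSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the vendor's signing key pair (the vendor keeps the
        // private half; the public half ships on every device).
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair vendorKeys = gen.generateKeyPair();

        byte[] appBinary = "pretend this is an app binary".getBytes("UTF-8");

        // Review/approval step: the vendor signs the approved binary.
        byte[] sig = sign(appBinary, vendorKeys.getPrivate());

        // Load step on the device: refuse unsigned or tampered code.
        System.out.println("untampered app verifies: "
                + verify(appBinary, sig, vendorKeys.getPublic()));

        appBinary[0] ^= 1; // simulate a tampered or unapproved binary
        System.out.println("tampered app verifies: "
                + verify(appBinary, sig, vendorKeys.getPublic()));
    }

    static byte[] sign(byte[] data, PrivateKey key) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(key);
        s.update(data);
        return s.sign();
    }

    static boolean verify(byte[] data, byte[] sig, PublicKey key)
            throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(key);
        s.update(data);
        return s.verify(sig);
    }
}
```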
You can argue that the App Store is bad for developers, but it’s an effective barrier to malware, perhaps only accidentally.
Once on the device, apps run in a sandbox that limits their actions. For example, one app can’t read another’s data, no app can read the stored SMS (Short Message Service) messages, and so on. Because all apps share the same sandbox rules, though, the sandbox must permit every action that any app could ever need. For example, all apps can freely access the Internet and the Address Book.[1]
Android
Developers can directly place their apps on the Android Market, and there’s no review of the apps before they arrive there. Android phones require applications to be signed, but they can be self-signed. So, Google uses these signatures for bookkeeping, not to control what code can run. Because of this, Android users can download apps from anywhere, not just the Android Market.
Instead of using a top-down approach to malware prevention, Android uses crowdsourcing. Users rate and comment on apps; they can see how many other users have downloaded an app and can report malicious apps to Google. If enough users complain about an app, Google will remove it from the Market and can remotely remove it from devices. A good tip for Android users is never to install an app unless it has thousands of downloads and mostly positive comments. Another is to use only the Android Market. There have been a handful of malicious Android apps, but most of them were available only in markets other than the Android Market. The Android Market’s openness makes publishing easy for legitimate developers, but it also lowers the barrier for malware authors.
Once an app is on the device, Android also uses a sandbox model. However, the Android sandbox is app-specific. During installation, an app informs the user about which permissions it needs, and the user can accept or reject them; rejecting them means the app won’t be installed. The good thing is that each application’s sandbox can be customized, in contrast to Apple’s one-size-fits-all approach. For instance, your Tetris game doesn’t need access to the Internet, so it won’t have that access. The bad thing is that this model forces users to make the security decisions, which history has shown isn’t a good choice. Furthermore, users wanted those apps in the first place, or they wouldn’t be trying to install them, so they’ll be inclined to just click through the screens.
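As a concrete sketch of this per-app model, the snippet below shows how code running on an Android device can ask whether a given package holds a given permission, using the standard PackageManager API. The package name com.example.tetris and the helper class are hypothetical, invented for illustration.

```java
import android.Manifest;
import android.content.Context;
import android.content.pm.PackageManager;

// Sketch of Android's per-app permission model: each app's sandbox is
// defined by the permissions the user accepted at install time. The
// package name below is hypothetical.
public class PermissionCheck {
    public static boolean canUseInternet(Context context) {
        PackageManager pm = context.getPackageManager();
        // PERMISSION_GRANTED only if "com.example.tetris" declared
        // android.permission.INTERNET in its manifest and the user
        // accepted that permission when installing the app.
        return pm.checkPermission(Manifest.permission.INTERNET,
                "com.example.tetris") == PackageManager.PERMISSION_GRANTED;
    }
}
```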
Exploiting Vulnerabilities
Of course, attackers can just try to bypass the devices’ installation and review processes and exploit them directly. A mobile device’s attack surface is pretty similar to that of a PC; the easiest targets are applications such as Web browsers and email clients. Typically, the attack surface is smaller on mobile devices because there’s less code to attack. For instance, you don’t find Java or Flash in mobile browsers, but they’re quite common (and common exploitation targets) in desktop browsers.
However, smartphones offer two avenues of attack unavailable with PCs. One is SMS message processing. Collin Mulliner and I showed how to exploit a vulnerability in the iPhone’s SMS message parser to get control of the device.[2] Intrepidus Group researchers did the same thing against a Palm Pre.[3] The other avenue of attack—the GSM radio—has only recently been explored. Ralf-Philipp Weinmann showed how to use GSM software flaws to take over phones’ baseband processors.[4]
iOS
Having a software vulnerability is one thing; writing an exploit for it is another. Consider iOS, which uses a layered approach to prevent exploitation: it employs data execution prevention (DEP) and address space layout randomization (ASLR). DEP makes exploitation difficult by distinguishing between data and code, so an attacker can’t supply data to a process and then jump to that data to execute it as code. The typical way to bypass DEP is return-oriented programming (ROP), which reuses snippets of code already present in the process. However, ROP doesn’t work in the presence of ASLR because the attacker can’t find the code to reuse. So, turning a code execution vulnerability into a functional exploit is difficult.
That was just the first defense layer. If an attacker can get code running in a process by way of an exploit, iOS has many restrictions that will limit the damage the attacker can do. For example, the code will be running in a sandbox. The attacker won’t be able to do things such as send or receive SMS messages. In addition, the code will be running as the less privileged user “mobile” rather than at the root level. Finally, the attacker won’t be able to install and run any software or tools on the device. Attackers generally want to upload keyboard sniffers or other attack tools, but the code-signing requirements will make this impossible. This, combined with the fact that iOS doesn’t even come with a shell or other useful utilities, means that attackers will have to do all their work in the exploited process and won’t have persistence on the device.
Of course, no defense is perfect, and the layered iOS security model has been broken at least a couple of times. The first break was the SMS attack I mentioned earlier; it turns out the process that handles incoming SMS messages runs at the root level and isn’t sandboxed. The other was a website called jailbreakme.com, which chained two exploits together. The first was a code execution exploit against the MobileSafari browser. That exploit’s payload contained the second exploit, which escalated to root-level privileges, disabled code signing, and then downloaded and installed the real payload, which jailbroke the device. (For more on jailbreaking, see the sidebar.) These examples broke the defense layers, but the attacks had to be that much more sophisticated to work.
Android
Android sandboxes all the relevant applications, such as the Web browser, to restrict the damage attackers can do. This requires attackers to have two exploits, as I outlined for iOS: one to get code running and one to break out of the sandbox.[5] One more feature that helps protect Android is that many Android apps are written in Java, which is largely immune to memory corruption vulnerabilities. Collin and I found an SMS bug in Android that was similar to the one that defeated the iOS model.[6] But on Android, the bug was in a Java application and thus wasn’t exploitable.
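To see why, here’s a toy, self-contained Java example (my own illustration, not code from Android): an oversized, attacker-controlled input that would smash adjacent memory in C instead triggers a runtime exception in Java, so at worst the app crashes rather than handing the attacker control.

```java
// Toy demonstration of why Java code resists memory corruption: writing
// past the end of a buffer throws an exception instead of silently
// overwriting adjacent memory.
public class BoundsCheckDemo {
    public static void main(String[] args) {
        byte[] buffer = new byte[16];
        byte[] attackerInput = new byte[64]; // e.g., an oversized SMS field

        try {
            for (int i = 0; i < attackerInput.length; i++) {
                buffer[i] = attackerInput[i]; // in C, this would overflow
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            // The runtime stops the overflow at the array boundary.
            System.out.println("overflow blocked: " + e);
        }
    }
}
```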
However, one big drawback is that Android doesn’t employ ASLR or DEP. This makes constructing exploits much easier than for iOS or Windows Phone 7, which both feature these technologies. Other smartphones, such as the Palm Pre and BlackBerry, also lack ASLR and DEP. iOS, too, lacked DEP for its first year and added ASLR only this year. Hopefully, Android will soon follow suit.
Smartphones are becoming increasingly useful tools in everyday life. No one is ever lost or out of contact any longer. From a security perspective, these devices are typically more locked down than PCs and feature additional security measures such as sandboxing and code signing. However, because mobile devices store personal information, they’re attractive targets. Nevertheless, at this point in time, you’re less likely to lose personal data because of malware or drive-by downloads than if you had left your phone in a cab or at the local pub.
Jailbreaking
Jailbreaking disables code signing on iPhones so they can run apps that don’t come from the App Store. This breaks almost all the protections iOS offers. Removing the code-signing requirement opens the platform up to malware. In addition, many of the added unsigned applications run at the root level without a sandbox. The jailbreak patches also weaken data execution prevention by allowing memory that is both writable and executable, which iOS normally forbids. So, the openness that jailbreaking offers also introduces potential security problems.
About the Author
Charlie Miller is a computer security researcher at Accuvant Labs. Contact him at charlie.miller@accuvant.com.
[1] N. Seriot, “iPhone Privacy,” presentation at Black Hat DC, 2010.
[2] C. Mulliner and C. Miller, “Fuzzing the Phone in Your Phone,” presentation at Black Hat, 2009.
[3] “WebOS: Examples of SMS Delivered Injection Flaws,” Insight, 16 Apr. 2010.
[4] R.-P. Weinmann, “All Your Baseband Are Belong to Us.”
[5] B. Alberts and M. Oldani, “Beating Up on Android.”
[6] “CVE-2009-2999,” MITRE, 2011.