Toxic Plankton feeds on Android Market for two months
Google never said it wouldn't
The security of Google Android has once again been called into question after an academic researcher discovered 12 malicious apps hosted in the operating system's official applications market, some of which had been hosted there for months and racked up hundreds of thousands of downloads.
Ten of the apps reported last week by North Carolina State University professor Xuxian Jiang contained highly stealthy code that collected users' browsing history, bookmarks, and device information and sent them to servers under the control of the attackers. The professor said they also contained a backdoor largely made possible by a weakness documented at a security conference 12 months ago that allows Android apps to be surreptitiously updated.
The malicious titles also contained functions that allowed the developers to collect login credentials for Facebook, Gmail, and other accounts, although Jiang didn't find any evidence they were actively used. Carrying titles such as "Angry Birds Rio Unlock," the apps posed as legitimate programs. At least one of them was hosted in the Google-sponsored bazaar for more than two months and was downloaded more than 200,000 times, said Jiang, who added that they were yanked within hours of him alerting the company's security team.
One of the malicious apps Jiang found
Two additional apps contained code that racked up expensive phone bills by sending text messages to premium services.
A Google spokesman said there was no evidence the apps had been used to compromise any Google user accounts, but otherwise declined to discuss Jiang's findings. Instead, he offered what's becoming a standard response when malware is discovered in its software forum:
“We're aware of and have suspended a number of suspicious applications from Android Market,” he wrote in an email. “We remove apps and developer accounts that violate our policies.”
Jiang's discovery follows a separate rash of malicious apps, dubbed “DroidDream,” that hit the Android Market two weeks ago. More than two dozen titles had to be pulled after third-party researchers reported them to Google. The trojans had been downloaded as many as 120,000 times.
Is there a policeman in the Market?
In most respects, Google leads the pack when it comes to policing the security of its users. Unlike Microsoft, which has admitted to attacks on Hotmail users only after they were disclosed by third parties, Google has proactively warned of attacks affecting users on multiple occasions.
It has also assembled a brain trust of some of the most respected security researchers in the world. Their work has gone a long way to developing a web browser, a stable of web-based applications, and other services whose security is second to none. Google has also shown leadership by being among the first to fortify its services with other useful security features, including a two-step verification process and automatic warnings of suspicious logins to a user's account.
Android is clearly an exception. The backdoor contained in the rogue applications discovered by Jiang adopted a technique that closely mimics the “rootstrap” proof-of-concept exploit released in June 2010 by researcher Jon Oberheide. The apps actively exploited a significant omission in the Android security model that Google has shown no signs of fixing.
“This is something that's unique to Android because it doesn't have any sort of code-signing guarantees like the iPhone has,” Oberheide told The Register on Friday. “On iPhones, when you publish an app to the app market, Apple signs whatever code is distributed with the application that says you can only execute this code. You can't easily pull down new code over the internet and execute it.”
The apps discovered by Jiang were under no such restrictions, making it easy for them to pull down new code at any time and greatly expand their capabilities, as long as they operated within the permissions the user granted when they were first installed. As a result, apps that look safe at time of download can lurk on a phone for months or years before pulling down new code that vastly changes their behavior.
“I tried to put pressure on Google a year ago by publishing this rootstrapping stuff, saying you need to be doing similar code signing as Apple," Oberheide continued. "This would at least provide some guarantee about the code the application is going to execute."
To be sure, code signing isn't a silver bullet that completely prevents apps from downloading new code and executing it at run time. Apps running on Apple's iOS could in theory do the same thing by sneaking what's known as an interpreter into a rogue app, or by adopting a tedious development technique known as ROP, or return-oriented programming. Almost no security researcher would disagree, however, that code signing significantly raises the bar to such attacks.
Code signing also helps prevent or lessen the effects of entire classes of exploits, such as those that corrupt memory.
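The guarantee Oberheide describes can be illustrated with a minimal sketch: refuse to execute a downloaded payload unless it verifies against the publisher's public key. This is not Apple's or Google's actual signing machinery, only the standard JDK `Signature` API, and the class and method names are illustrative:

```java
import java.security.*;

// Hypothetical sketch of publish-time signing and install/run-time
// verification, the check Android lacked for dynamically fetched code.
public class SignedPayload {
    public static KeyPair newKeyPair() {
        try {
            KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
            g.initialize(2048);
            return g.generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Publisher side: sign the payload before distributing it.
    public static byte[] sign(byte[] payload, PrivateKey key) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initSign(key);
            s.update(payload);
            return s.sign();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Device side: execute the payload only if the signature checks out.
    public static boolean verify(byte[] payload, byte[] sig, PublicKey key) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initVerify(key);
            s.update(payload);
            return s.verify(sig);
        } catch (GeneralSecurityException e) {
            return false;
        }
    }
}
```

Any tampering with the payload after signing causes verification to fail, which is exactly the property that stops an app from swapping in new, unvetted code after review.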
In an email, the Google spokesman responded: "Code signing, as discussed in various public forums, does not guarantee that a malicious application cannot run untrusted code. Regardless of the platform, it doesn't prevent an application from executing code from the Internet."
The adoption of Oberheide's rootstrapping technique is a sign that real-world criminals have taken notice of Android's lack of code signing and are beginning to exploit it. By combining it with Google's failure to vet the security of apps hosted in the Android Market, the company's mobile OS is perhaps the weakest link in a security chain that otherwise is among the strongest in the industry.
The rogue apps' backdoor worked by periodically querying a server for executable files that run under Android's Dalvik virtual machine. With no code signing in Android, the files could bypass standard techniques used to detect malicious code, giving the attackers an easy way to push new payloads to compromised handsets. This capability could prove especially useful in exploiting vulnerabilities that are discovered months or years after the rogue application was installed.
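The fetch-and-load loop described above can be sketched in outline. Since Android's `DexClassLoader` can't run on a desktop JVM, this hedged reconstruction stubs the fetch and execute steps with functional interfaces; all names are illustrative, inferred from the article's description rather than Plankton's actual code:

```java
import java.util.*;
import java.util.function.*;

// Illustrative control flow of a Plankton-style backdoor: poll a server,
// pull an executable payload, and run it under the app's existing
// permissions. Note there is no signature or integrity check anywhere.
public class BackdoorSketch {
    public static List<String> pollOnce(Supplier<byte[]> fetchPayload,
                                        Function<byte[], List<String>> loadAndRun) {
        byte[] payload = fetchPayload.get();   // e.g. HTTP GET to the C2 server
        if (payload == null || payload.length == 0) {
            return Collections.emptyList();    // nothing new to run this cycle
        }
        // On Android this step would hand the downloaded .jar/.dex to
        // DexClassLoader and invoke the classes it contains.
        return loadAndRun.apply(payload);
    }
}
```

The point of the sketch is the absence of any gate between download and execution: whatever the server returns, the app will run, months or years after the original install passed review.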
Jiang has more details here about the malware, which is dubbed Plankton.
But wait, there's more
Add to all of this Google's own admission that more than 90 percent of Android users are running older versions of the mobile operating system that contain serious kernel vulnerabilities. That gives attackers an easy way to bypass Android's security sandbox that's supposed to limit the data and resources each app is allowed to access.
Then remember that Google makes no promises to vet the security of apps hosted in its own store, and it's easy to see why users have good reason to be wary of the platform.
“The ball is in Google's court here,” Oberheide said. “They need to harden the platform a bunch. They have their hands full now.” ®
"Angry Birds Rio Unlock" sounds like a legitimate program?
Updates should be part of the Android/Google agreement
Personally, I think if you buy a Google-approved phone, i.e. one that comes with the Market and other Google apps, there should be something in the agreement between the phone maker and Google to guarantee updates for a period of time.
i.e. a phone manufacturer:
* Must provide all Android OS updates for up to 24 months after the launch date of the phone (as long as the hardware can cope with the update, of course), and...
* Must provide those updates OTA and within 1 month of the update being released by Google themselves.
If they don't agree to this, then they should not be allowed to have access to the Google Market etc.
The fix is pretty simple
...and has been an outstanding request on the Android Issues site for years: We need to be able to fake permissions.
That is, when some application asks for permission to send SMS we currently have a boolean choice: refuse, and do without the app; allow, and hope it doesn’t bankrupt you.
What has been suggested (many times) is there should be a third choice – safe mode. The app *thinks* it has permission to send SMS messages (for example), but they don’t go anywhere.
An app that wants to read your contacts would, if that permission were ‘safe’, get an empty list. One that wanted to access the internet would get a 404. One that tried to use the camera would get a black rectangle, etc.
There’s no good reason for not having such a system – claiming that users are too dim to understand it is no excuse for exposing them to the vulnerability and forcing on them the impossible question of “the app wants these permissions, do you want to install it?”.
We’re in the ridiculous situation of some developers listing what they require permissions for – which just changes the question from “do I trust the app?” to the rather more realistic “do I trust the developer?”.
‘Safe mode’ is the obvious solution. (Choose ‘safe’ when installing an app and all permissions are set to safe mode; one can then elevate individual permissions for an app under the Manage Applications settings screen.)
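The commenter's scheme can be sketched in a few lines. This is a hypothetical model, not any real Android API: each permission is GRANTED, DENIED, or SAFE, and in SAFE mode the app believes its call succeeded but receives harmless stand-in data:

```java
import java.util.*;

// Illustrative model of the proposed "safe mode" third state for
// permissions. All names here are made up for the sketch.
public class SafePermissions {
    public enum Mode { GRANTED, DENIED, SAFE }

    private final Map<String, Mode> modes = new HashMap<>();

    public void set(String permission, Mode mode) { modes.put(permission, mode); }

    private Mode mode(String permission) {
        return modes.getOrDefault(permission, Mode.DENIED);
    }

    // READ_CONTACTS in SAFE mode: the app gets an empty list, not real data.
    public List<String> readContacts(List<String> realContacts) {
        switch (mode("READ_CONTACTS")) {
            case GRANTED: return realContacts;
            case SAFE:    return Collections.emptyList();
            default:      throw new SecurityException("READ_CONTACTS denied");
        }
    }

    // SEND_SMS in SAFE mode: the app sees success, but nothing is sent.
    public boolean sendSms(String number, String text) {
        switch (mode("SEND_SMS")) {
            case GRANTED: return true;  // would hand off to the radio here
            case SAFE:    return true;  // silently dropped; no premium charge
            default:      throw new SecurityException("SEND_SMS denied");
        }
    }
}
```

The key design point is that SAFE is indistinguishable from GRANTED from the app's side, so an app can't detect it is being sandboxed and refuse to run.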