That malware with its own backdoor into Android's framework? Don't worry; Google's on it. (Gulp!)

One of mobile security's biggest fears has come to pass. Google last week (June 6) confirmed that cyberthieves had managed to get malware pre-installed as a backdoor in the Android framework itself. In short, the malware appeared to be blessed by Google at the deepest point within Android.

"In the Google Play app context, installation meant that [the malware] didn’t have to turn on installation from unknown sources and all app installs looked like they were from Google Play," wrote Lukasz Siewierski, of the Android security and privacy team, in a blog post. "The apps were downloaded from the C&C server and the communication with the C&C was encrypted using the same custom encryption routine using double XOR and zip. The downloaded and installed apps used the package names of unpopular apps available on Google Play. They didn’t have any relation to the apps on Google Play apart from the same package name."

Enterprise CISOs and CSOs, along with CIOs, are discovering that trusting the major mobile operating system companies today — Apple and Google — to handle their end of security protections is foolhardy. Due to the nature of the Apple ecosystem (a total of one handset maker, which allows for a much more closed system), iOS is slightly more secure, but only slightly.

Still, Google's new admission certainly makes Apple look a little better in the security area. The issue isn't with the operating systems per se; both iOS and Android have reasonably secure code. It's with the apps offered to enterprises and consumers through the officially sanctioned app repositories. Enterprise security pros already know that neither Apple nor Google does a heck of a lot to validate the security of those apps. At best, both are checking for policy and copyright issues far more than for the presence of malware.

But that's dealing with true third-party apps. Apps coming directly from Apple and Google can be trusted, or so it was thought until Google's disclosure.

The incident Google disclosed happened some two years ago, and the blog post didn't say why Google stayed silent at the time, or why it chose to speak now. It might be that Google wanted to make sure it had sufficiently closed this hole before announcing it, but two years is an awfully long time to know about a hole this serious and say nothing.

So what actually happened? Google gets points for publishing lots of details. The background to Google's story begins a year earlier still, roughly three years ago, with a family of spam-ad-displaying apps called Triada.

"The main purpose of Triada apps was to install spam apps on a device that displays ads," Siewierski wrote. "The creators of Triada collected revenue from the ads displayed by the spam apps. The methods Triada used were complex and unusual for these types of apps. Triada apps started as rooting trojans, but as Google Play Protect strengthened defenses against rooting exploits, Triada apps were forced to adapt, progressing to a system image backdoor."

Siewierski then detailed the app's methodology: "Triada’s first action was to install a type of superuser (su) binary file. This su binary allowed other apps on the device to use root permissions. The su binary used by Triada required a password, so was unique compared to regular su binary files common with other Linux systems. The binary accepted two passwords: od2gf04pd9 and ac32dorbdq. Depending on which one was provided, the binary either ran the command given as an argument as root or concatenated all of the arguments, ran that concatenation preceded by sh, then ran them as root. Either way, the app had to know the correct password to run the command as root."
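The real su binary is native code, and Google didn't publish its source; the Java sketch below just mirrors the decision logic Siewierski describes. The two passwords are quoted from the post, but the structure, names and exact concatenation behavior are assumptions.

```java
// Sketch of the password-gated dispatch logic Google attributes to
// Triada's su binary. The two passwords come from the blog post; the
// structure and names here are illustrative, not decompiled code.
public class SuDispatch {

    static Process dispatch(String password, String[] args) throws Exception {
        if ("od2gf04pd9".equals(password)) {
            // Mode 1: run the given argument vector directly as root.
            return new ProcessBuilder(args).start();
        } else if ("ac32dorbdq".equals(password)) {
            // Mode 2: concatenate all arguments into one command line
            // and hand the result to a shell ("sh -c <concatenation>").
            String joined = String.join(" ", args);
            return new ProcessBuilder("sh", "-c", joined).start();
        }
        throw new SecurityException("wrong password: command refused");
    }
}
```

Either way, as Siewierski notes, nothing runs as root unless the caller knows one of the two passwords.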

The app used an impressively sophisticated system to free up the space it needed while avoiding, to the extent it could, deleting anything that would alert IT or the consumer to a problem. "Weight watching included several steps and attempted to free up space on the device’s user partition and system partition. Using a blacklist and whitelist of apps, it first removed all the apps on its blacklist. If more free space was required, it would remove all other apps leaving only the apps on the whitelist. This process freed space while ensuring the apps needed for the phone to function properly were not removed." He also noted that "in addition to installing apps that display ads, Triada injected code into four web browsers: AOSP (com.android.browser), 360 Secure (com.qihoo.browser), Cheetah (com.ijinshan.browser_fast) and Oupeng (com.oupeng.browser)."
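Google doesn't publish the lists or the threshold, but the two-step logic is straightforward. Here's a hypothetical Java sketch; the Device interface, list contents and free-space check are illustrative stand-ins.

```java
import java.util.List;
import java.util.Set;

// Sketch of the "weight watching" routine Google describes: free space by
// removing blacklisted apps first, then everything not on the whitelist.
// The Device interface, threshold and list contents are hypothetical.
public class WeightWatcher {

    interface Device {
        long freeBytes();                  // space left on the partition
        List<String> installedPackages();  // fresh snapshot on each call
        void uninstall(String pkg);
    }

    static void freeSpace(Device dev, long needed,
                          Set<String> blacklist, Set<String> whitelist) {
        // Step 1: always remove everything on the blacklist.
        for (String pkg : dev.installedPackages()) {
            if (blacklist.contains(pkg)) {
                dev.uninstall(pkg);
            }
        }
        // Step 2: if still short on space, remove all remaining apps
        // except those the phone needs to function (the whitelist).
        if (dev.freeBytes() < needed) {
            for (String pkg : dev.installedPackages()) {
                if (!whitelist.contains(pkg)) {
                    dev.uninstall(pkg);
                }
            }
        }
    }
}
```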

At that point, Siewierski wrote, Google detected the malware, removed Triada samples using Google Play Protect and tried to thwart Triada in other ways. That's when Triada fought back, around the summer of 2017. "Instead of rooting the device to obtain elevated privileges, Triada evolved to become a pre-installed Android framework backdoor. The changes to Triada included an additional call in the Android framework log function. By backdooring the log function, the additional code executes every time the log method is called. That is, every time any app on the phone tries to log something. These log attempts happen many times per second, so the additional code [was] running non-stop. The additional code also executes in the context of the app logging a message, so Triada can execute code in any app context. The code injection framework in early versions of Triada worked on Android releases prior to Marshmallow. The main purpose of the backdoor function was to execute code in another app’s context. The backdoor attempts to execute additional code every time the app needs to log something."
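To make the mechanics concrete, here is an illustrative Java sketch of what a hooked framework log method could look like. This is not AOSP source or Triada's actual code; the names and hook body are assumptions meant to show why a single added call on the logging hot path yields code execution in every app's process.

```java
// Illustrative sketch of the backdoor Google describes in the framework's
// log function. Names and structure are assumptions, not AOSP source.
public final class FrameworkLog {

    // Stand-in for the malicious code Triada injected. Because it sits on
    // the logging hot path, it runs many times per second, inside the
    // process of whichever app happens to be logging.
    private static void triadaHook(String tag, String msg) {
        // e.g. check whether this process is an injection target and,
        // if so, load and execute additional code in its context.
    }

    public static int i(String tag, String msg) {
        triadaHook(tag, msg);                  // the added call
        System.out.println(tag + ": " + msg);  // stand-in for the real log write
        return 0;
    }
}
```

Since every app calls the framework's log method, the hook inherits each caller's identity and permissions, which is exactly the "execute code in any app context" capability the post describes.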

The malware then got creative about finding ways to avoid — or to at least delay — detection.

"Each MMD file had a specific file name of the format <MD5 of the process name>36.jmd. By using the MD5 of the process name, the Triada authors tried to obscure the injection target. However, the pool of all available process names is fairly small, so this hash was easily reversible. We identified two code injection targets: com.android.systemui (the System UI app) and com.android.vending (the Google Play app). The first target was injected to get the GET_REAL_TASKS permission. This is a signature-level permission, which means that it can’t be held by ordinary Android apps. Starting with Android Lollipop, the getRecentTasks() method is deprecated to protect users' privacy. However, apps holding the GET_REAL_TASKS permission can get the result of this method call. To hold the GET_REAL_TASKS permission, an app has to be signed with a specific certificate, the device’s platform cert, which is held by the OEM. Triada didn’t have access to this cert. Instead it executed additional code in the System UI app, which has the GET_REAL_TASKS permission."

The malware had one more trick up its evil sleeve. "The last piece of the puzzle was the way the backdoor in the log function communicated with the installed apps. This communication prompted the investigation: the change in Triada made it appear that there was another component on the system image. The apps could communicate with the Triada backdoor by logging a line with a specific predefined tag and message. The reverse communication was more complicated. The backdoor used Java properties to relay a message to the app. These properties were key-value pairs similar to Android system properties, but they were scoped to a specific process. Setting one of these properties in one app context ensures that other apps won’t see this property. Despite that, some versions of Triada indiscriminately created the properties in every single app process."
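Here's a hypothetical Java sketch of that two-way channel. The tag and property key are invented, but the mechanisms, a log line with a magic tag in one direction and a per-process Java system property in the other, match the post's description; Java's System properties really are scoped to the process that sets them, unlike global Android system properties.

```java
// Sketch of the two-way channel Google describes. The tag, message format
// and property key are invented for illustration.
public class BackdoorChannel {

    static final String MAGIC_TAG = "TRIADA_CMD";    // hypothetical
    static final String REPLY_KEY = "triada.reply";  // hypothetical

    // App -> backdoor: write a log line with the predefined tag. The
    // backdoored log function sees every log call, so it can filter
    // for this tag and treat the message as a request.
    static void sendToBackdoor(String message) {
        android.util.Log.d(MAGIC_TAG, message);
    }

    // Backdoor -> app: set a Java system property. These key-value pairs
    // are scoped to the current process, so only the targeted app's
    // process sees them.
    static void reply(String value) {
        System.setProperty(REPLY_KEY, value);
    }

    static String readReply() {
        return System.getProperty(REPLY_KEY);
    }
}
```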

At the end of the post, which has a lot more code and is worth a thorough read, Google offers some thoughts on next steps. Look carefully at its suggestions and see if you can detect who seems to emerge blameless from all of this. From Google's suggestions: "OEMs should ensure that all third-party code is reviewed and can be tracked to its source. Additionally, any functionality added to the system image should only support requested features. It’s a good practice to perform a security review of a system image after adding third-party code. Triada was inconspicuously included in the system image as third-party code for additional features requested by the OEMs. This highlights the need for thorough ongoing security reviews of system images before the device is sold to the users as well as any time they get updated over-the-air (OTA)."

That's fair, but who exactly is supposed to be doing these ongoing security reviews? Surely Google isn't suggesting that something so important be left in the hands of OEMs unchecked. I conclude that Google will be adding extensive resources to its own security teams to make sure that nothing like this gets through the OEM checkpoints again.

There is an issue of trusting Google, and Apple, when it comes to making sure that mobile operating systems and the associated apps are secure. OEMs have very little ROI to justify big security investments. The buck must stop with Google. I don't recall BlackBerry having many of these kinds of issues, and that was because, as a company, it prioritized security. (OK, perhaps it should have spared a bit of that priority for marketing, but I digress.)

If Google doesn't do more for security, CIOs, CISOs and CSOs are going to have to either take on this task themselves or seriously question which mobile OS they can justify supporting.

Copyright © 2019 IDG Communications, Inc.
