Google Security Blog

Introducing reCAPTCHA v3: the new way to stop bots

Google Security Blog - Mon, 10/29/2018 - 7:53pm
Posted by Wei Liu, Google Product Manager

[Cross-posted from the Google Webmaster Central Blog]

Today, we’re excited to introduce reCAPTCHA v3, our newest API that helps you detect abusive traffic on your website without user interaction. Instead of showing a CAPTCHA challenge, reCAPTCHA v3 returns a score so you can choose the most appropriate action for your website.

A frictionless user experience

Over the last decade, reCAPTCHA has continuously evolved its technology. In reCAPTCHA v1, every user was asked to pass a challenge by reading distorted text and typing into a box. To improve both user experience and security, we introduced reCAPTCHA v2 and began to use many other signals to determine whether a request came from a human or bot. This enabled reCAPTCHA challenges to move from a dominant to a secondary role in detecting abuse, letting about half of users pass with a single click. Now with reCAPTCHA v3, we are fundamentally changing how sites can test for human vs. bot activities by returning a score to tell you how suspicious an interaction is and eliminating the need to interrupt users with challenges at all. reCAPTCHA v3 runs adaptive risk analysis in the background to alert you of suspicious traffic while letting your human users enjoy a frictionless experience on your site.

More Accurate Bot Detection with "Actions"

In reCAPTCHA v3, we are introducing a new concept called “Action”—a tag that you can use to define the key steps of your user journey and enable reCAPTCHA to run its risk analysis in context. Since reCAPTCHA v3 doesn't interrupt users, we recommend adding reCAPTCHA v3 to multiple pages. In this way, the reCAPTCHA adaptive risk analysis engine can identify attacker patterns more accurately by looking at activity across different pages on your website. In the reCAPTCHA admin console, you get a full overview of the reCAPTCHA score distribution and a breakdown of the stats for the top 10 actions on your site, to help you identify exactly which pages are being targeted by bots and how suspicious the traffic on those pages was.
Fighting bots your way
Another big benefit that you’ll get from reCAPTCHA v3 is the flexibility to prevent spam and abuse in the way that best fits your website. Previously, the reCAPTCHA system mostly decided when and what CAPTCHAs to serve to users, leaving you with limited influence over your website’s user experience. Now, reCAPTCHA v3 will provide you with a score that tells you how suspicious an interaction is. There are three potential ways you can use the score. First, you can set a threshold that determines when a user is let through or when further verification needs to be done, for example, using two-factor authentication and phone verification. Second, you can combine the score with your own signals that reCAPTCHA can’t access—such as user profiles or transaction histories. Third, you can use the reCAPTCHA score as one of the signals to train your machine learning model to fight abuse. By providing you with these new ways to customize the actions that occur for different types of traffic, this new version lets you protect your site against bots and improve your user experience based on your website’s specific needs.
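To make the first option concrete, here is a minimal server-side sketch in Java. It posts a client token to the documented siteverify endpoint using Java 11's built-in HTTP client; the placeholder secret key, the regex-based score extraction, and the 0.5 threshold are illustrative choices, not recommendations.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RecaptchaV3Check {

    // Illustrative placeholder; use the secret key from your reCAPTCHA admin console.
    private static final String SECRET = "your-secret-key";

    // Ask reCAPTCHA to score the token produced by the client-side integration.
    // Tokens are assumed URL-safe here.
    static double score(String token) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.google.com/recaptcha/api/siteverify"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "secret=" + SECRET + "&response=" + token))
                .build();
        String json = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
        // Crude extraction; production code should use a JSON library.
        Matcher m = Pattern.compile("\"score\":\\s*([0-9.]+)").matcher(json);
        return m.find() ? Double.parseDouble(m.group(1)) : 0.0;
    }

    public static void main(String[] args) throws Exception {
        double score = score(args[0]);
        if (score < 0.5) {
            // Example policy: fall back to further verification, e.g. 2FA.
            System.out.println("Suspicious interaction (" + score + ")");
        } else {
            System.out.println("Likely human (" + score + ")");
        }
    }
}

A real deployment would parse the full JSON response and also check the action and hostname fields before trusting the score.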

In short, reCAPTCHA v3 helps to protect your sites without user friction and gives you more power to decide what to do in risky situations. As always, we are working every day to stay ahead of attackers and keep the Internet easy and safe to use (except for bots).

Ready to get started with reCAPTCHA v3? Visit our developer site for more details.

Google tackles new ad fraud scheme

Google Security Blog - Tue, 10/23/2018 - 1:11pm
Posted by Per Bjorke, Product Manager, Ad Traffic Quality

Fighting invalid traffic is essential for the long-term sustainability of the digital advertising ecosystem. We have an extensive internal system to filter out invalid traffic – from simple filters to large-scale machine learning models – and we collaborate with advertisers, agencies, publishers, ad tech companies, research institutions, law enforcement and other third party organizations to identify potential threats. We take all reports of questionable activity seriously, and when we find invalid traffic, we act quickly to remove it from our systems.

Last week, BuzzFeed News provided us with information that helped us identify new aspects of an ad fraud operation across apps and websites that were monetizing with numerous ad platforms, including Google. While our internal systems had previously caught and blocked violating websites from our ad network, in the past week we also removed apps involved in the ad fraud scheme so they can no longer monetize with Google. Further, we have blacklisted additional apps and websites that are outside of our ad network, to ensure that advertisers using Display & Video 360 (formerly known as DoubleClick Bid Manager) do not buy any of this traffic. We are continuing to monitor this operation and will continue to take action if we find any additional invalid traffic.

While our analysis of the operation is ongoing, we estimate that the dollar value of impacted Google advertiser spend across the apps and websites involved in the operation is under $10 million. The majority of impacted advertiser spend was from invalid traffic on inventory from non-Google, third-party ad networks.

A technical overview of the ad fraud operation is included below.

Collaboration throughout our industry is critical in helping us to better detect, prevent, and disable these threats across the ecosystem. We want to thank BuzzFeed for sharing information that allowed us to take further action. This effort highlights the importance of collaborating with others to counter bad actors. Ad fraud is an industry-wide issue that no company can tackle alone. We remain committed to fighting invalid traffic and ad fraud threats such as this one, both to protect our advertisers, publishers, and users, as well as to protect the integrity of the broader digital advertising ecosystem.
Technical Detail
Google deploys comprehensive, state-of-the-art systems and procedures to combat ad fraud. We have made and continue to make considerable investments to protect our ad systems against invalid traffic.

As detailed above, we’ve identified, analyzed and blocked invalid traffic associated with this operation, both by removing apps and blacklisting websites. Our engineering and operations teams, across various organizations, are also taking systemic action to disrupt this threat, including the takedown of command and control infrastructure that powers the associated botnet. In addition, we have shared relevant technical information with trusted partners across the ecosystem, so that they can also harden their defenses and minimize the impact of this threat throughout the industry.

The BuzzFeed News report covers several fraud tactics (both web and mobile app) that are allegedly utilized by the same group. The web-based traffic is generated by a botnet that Google and others have been tracking, known as “TechSnab.” The TechSnab botnet is a small to medium-sized botnet that has existed for a few years. The number of active infections associated with TechSnab was reduced significantly after the Google Chrome Cleanup tool began prompting users to uninstall the malware.

In similar fashion to other botnets, TechSnab operates by creating hidden browser windows that visit web pages to inflate ad revenue. The malware contains common IP-based cloaking, data obfuscation, and anti-analysis defenses. The botnet drove traffic to a ring of websites created specifically for this operation, and monetized with Google and many third-party ad exchanges. As mentioned above, we began taking action on these websites earlier this year.

Based on analysis of historical ads.txt crawl data, inventory from these websites was widely available throughout the advertising ecosystem, and as many as 150 exchanges, supply-side platforms (SSPs) or networks may have sold this inventory. The botnet operators had hundreds of accounts across 88 different exchanges (based on accounts listed with “DIRECT” status in their ads.txt files).
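For context, each line of an ads.txt file lists an exchange domain, a seller account ID, and a relationship type (DIRECT or RESELLER), optionally followed by a certification authority ID. A tally like the one above can be approximated in a few lines; the rough Java sketch below reflects assumptions of ours (a local file as input, reasonably well-formed lines) and is not the crawler we used.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

public class AdsTxtDirectTally {
    public static void main(String[] args) throws Exception {
        // Maps exchange domain -> set of account IDs listed as DIRECT.
        Map<String, Set<String>> direct = new TreeMap<>();
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            String[] fields = line.split("#", 2)[0].split(",");  // drop comments
            if (fields.length >= 3 && fields[2].trim().equalsIgnoreCase("DIRECT")) {
                direct.computeIfAbsent(fields[0].trim().toLowerCase(), k -> new TreeSet<>())
                      .add(fields[1].trim());
            }
        }
        direct.forEach((exchange, accounts) ->
                System.out.println(exchange + ": " + accounts.size() + " DIRECT account(s)"));
    }
}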

This fraud primarily impacted mobile apps. We investigated the apps that were monetizing via AdMob and removed those engaged in this behavior from our ad network. The traffic from these apps appears to be a blend of organic user traffic and artificially inflated ad traffic, including traffic based on hidden ads. Additionally, we found several ad networks present in these apps, indicating that many of them were likely being used for monetization. We are actively tracking this operation, and continually updating and improving our enforcement tactics.

Android Protected Confirmation: Taking transaction security to the next level

Google Security Blog - Fri, 10/19/2018 - 1:08pm
Posted by Janis Danisevskis, Information Security Engineer, Android Security

[Cross-posted from the Android Developers Blog]

In Android Pie, we introduced Android Protected Confirmation, the first major mobile OS API that leverages a hardware protected user interface (Trusted UI) to perform critical transactions completely outside the main mobile operating system. This Trusted UI protects the choices you make from fraudulent apps or a compromised operating system. When an app invokes Protected Confirmation, control is passed to the Trusted UI, where transaction data is displayed and user confirmation of that data's correctness is obtained.
Once confirmed, your intention is cryptographically authenticated and unforgeable when conveyed to the relying party, for example, your bank. Protected Confirmation increases the bank's confidence that it acts on your behalf, providing a higher level of protection for the transaction.
Protected Confirmation also provides additional security relative to other forms of secondary authentication, such as a One Time Password or Transaction Authentication Number. These mechanisms can be frustrating for mobile users, and they also fail to protect against a compromised device that can corrupt transaction data or intercept one-time confirmation text messages.
Once the user approves a transaction, Protected Confirmation digitally signs the confirmation message. Because the signing key never leaves the Trusted UI's hardware sandbox, neither app malware nor a compromised operating system can fool the user into authorizing anything. Protected Confirmation signing keys are created using Android's standard AndroidKeyStore API. Before it can start using Android Protected Confirmation for end-to-end secure transactions, the app must enroll the public KeyStore key and its Keystore Attestation certificate with the remote relying party. The attestation certificate certifies that the key can only be used to sign Protected Confirmations.
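The Android Protected Confirmation training article walks through the full flow; the condensed Java sketch below shows the two essential steps, generating a confirmation-bound key and signing the confirmed message. The key alias, prompt text, and omitted error handling are illustrative.

import android.content.Context;
import android.security.ConfirmationCallback;
import android.security.ConfirmationPrompt;
import android.security.keystore.KeyGenParameterSpec;
import android.security.keystore.KeyProperties;
import java.security.KeyPairGenerator;
import java.security.KeyStore;
import java.security.PrivateKey;
import java.security.Signature;
import java.util.concurrent.Executor;

class ProtectedConfirmationSketch {

    // Create a signing key that can only be used for user-confirmed messages.
    void createKey() throws Exception {
        KeyPairGenerator generator = KeyPairGenerator.getInstance(
                KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore");
        generator.initialize(new KeyGenParameterSpec.Builder(
                "transfer_key", KeyProperties.PURPOSE_SIGN)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .setUserConfirmationRequired(true) // usable only via the Trusted UI
                .build());
        generator.generateKeyPair();
    }

    // Present the Trusted UI, then sign exactly what the user confirmed.
    void confirm(Context context, Executor executor, byte[] extraData) throws Exception {
        ConfirmationPrompt prompt = new ConfirmationPrompt.Builder(context)
                .setPromptText("Transfer $100 to Alice?")
                .setExtraData(extraData) // protocol data from the relying party
                .build();
        prompt.presentPrompt(executor, new ConfirmationCallback() {
            @Override
            public void onConfirmed(byte[] dataThatWasConfirmed) {
                try {
                    KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");
                    keyStore.load(null);
                    Signature signer = Signature.getInstance("SHA256withECDSA");
                    signer.initSign((PrivateKey) keyStore.getKey("transfer_key", null));
                    signer.update(dataThatWasConfirmed);
                    byte[] signature = signer.sign(); // send to the relying party
                } catch (Exception e) {
                    // Handle signing failure.
                }
            }
        });
    }
}

The relying party then verifies the signature against the public key and attestation certificate enrolled earlier.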
There are many possible use cases for Android Protected Confirmation. At Google I/O 2018, the What's new in Android security session showcased partners planning to leverage Android Protected Confirmation in a variety of ways, including Royal Bank of Canada person to person money transfers; Duo Security, Nok Nok Labs, and ProxToMe for user authentication; and Insulet Corporation and Bigfoot Biomedical, for medical device control.
Insulet, a leading global manufacturer of tubeless patch insulin pumps, has demonstrated how it can modify its FDA-cleared Omnipod DASH™ Insulin Management System in a test environment to leverage Protected Confirmation to confirm the amount of insulin to be injected. This technology holds the promise of improved quality of life and reduced cost by enabling a person with diabetes to use their convenient, familiar, and secure smartphone for control, rather than having to rely on a secondary, obtrusive, and expensive remote control device. (Note: The Omnipod DASH™ System is not cleared for use with the Pixel 3 mobile device or Protected Confirmation.)

This work is fulfilling an important need in the industry. Since smartphones do not fit the mold of an FDA approved medical device, we've been working with FDA as part of DTMoSt, an industry-wide consortium, to define a standard for phones to safely control medical devices, such as insulin pumps. A technology like Protected Confirmation plays an important role in gaining higher assurance of user intent and medical safety.
To integrate Protected Confirmation into your app, check out the Android Protected Confirmation training article. Android Protected Confirmation is an optional feature in Android Pie. Because it has low-level hardware dependencies, Protected Confirmation may not be supported by all devices running Android Pie. Google Pixel 3 and 3XL devices are the first to support Protected Confirmation, and we are working closely with other manufacturers to adopt this market-leading security innovation on more devices.

Modernizing Transport Security

Google Security Blog - Wed, 10/17/2018 - 4:20pm
Posted by David Benjamin, Chrome networking

*Updated on October 17, 2018 with details about changes in other browsers

TLS (Transport Layer Security) is the protocol which secures HTTPS. It has a long history stretching back to the nearly twenty-year-old TLS 1.0 and its even older predecessor, SSL. Over that time, we have learned a lot about how to build secure protocols.

TLS 1.2 was published ten years ago to address weaknesses in TLS 1.0 and 1.1 and has enjoyed wide adoption since then. Today only 0.5% of HTTPS connections made by Chrome use TLS 1.0 or 1.1. These old versions of TLS rely on MD5 and SHA-1, both now broken, and contain other flaws. TLS 1.0 is no longer PCI-DSS compliant and the TLS working group has adopted a document to deprecate TLS 1.0 and TLS 1.1.

In line with these industry standards, Google Chrome will deprecate TLS 1.0 and TLS 1.1 in Chrome 72. Sites using these versions will begin to see deprecation warnings in the DevTools console in that release. TLS 1.0 and 1.1 will be disabled altogether in Chrome 81. This will affect users on early release channels starting January 2020. Apple, Microsoft, and Mozilla have made similar announcements.

Site administrators should immediately enable TLS 1.2 or later. Depending on server software (such as Apache or nginx), this may be a configuration change or a software update. Additionally, we encourage all sites to revisit their TLS configuration. Chrome's current criteria for modern TLS are the following:

  • TLS 1.2 or later.
  • An ECDHE- and AEAD-based cipher suite. AEAD-based cipher suites are those using AES-GCM or ChaCha20-Poly1305. ECDHE_RSA_WITH_AES_128_GCM_SHA256 is the recommended option for most sites.
  • The server signature should use SHA-2. Note this is not the signature in the certificate, made by the CA. Rather, it is the signature made by the server itself, using its private key.

The older options—CBC-mode cipher suites, RSA-encryption key exchange, and SHA-1 online signatures—all have known cryptographic flaws. Each has been removed in the newly-published TLS 1.3, which is supported in Chrome 70. We retain them at prior versions for compatibility with legacy servers, but we will be evaluating them over time for eventual deprecation.
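To illustrate what the criteria above mean in practice, here is a small Java client sketch that restricts itself to the modern configuration. The host is a placeholder, and the protocol and cipher-suite strings are standard JSSE names (note that JSSE spells the recommended suite with a TLS_ prefix); this demonstrates client-side policy, not a server configuration.

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class ModernTlsClient {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
            // TLS 1.2 or later only.
            socket.setEnabledProtocols(new String[] {"TLSv1.3", "TLSv1.2"});
            // ECDHE- and AEAD-based suites only; TLS 1.3 suites are AEAD by definition.
            socket.setEnabledCipherSuites(new String[] {
                    "TLS_AES_128_GCM_SHA256",                  // TLS 1.3
                    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",   // TLS 1.2, recommended above
                    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
            });
            socket.startHandshake();
            System.out.println("Negotiated " + socket.getSession().getProtocol()
                    + " with " + socket.getSession().getCipherSuite());
        }
    }
}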

None of these changes require obtaining a new certificate. Additionally, they are backwards-compatible. Where necessary, servers may enable both modern and legacy options, to continue to support legacy clients. Note, however, such support may carry security risks. (For example, see the DROWN, FREAK, and ROBOT attacks.)

Over the coming Chrome releases, we will improve the DevTools Security Panel to point out deviations from these settings, and suggest improvements to the site’s configuration.

Enterprise deployments can preview the TLS 1.0 and 1.1 removal today by setting the SSLVersionMin policy to “tls1.2”. For enterprise deployments that need more time, this same policy can be used to re-enable TLS 1.0 or TLS 1.1 until January 2021.

Building a Titan: Better security through a tiny chip

Google Security Blog - Wed, 10/17/2018 - 2:10pm

Posted by Nagendra Modadugu and Bill Richardson, Google Device Security Group

[Cross-posted from the Android Developers Blog]

At the Made by Google event last week, we talked about the combination of AI + Software + Hardware to help organize your information. To better protect that information at a hardware level, our new Pixel 3 and Pixel 3 XL devices include a Titan M chip. We briefly introduced Titan M and some of its benefits on our Keyword Blog, and with this post we dive into some of its technical details.
Titan M is a second-generation, low-power security module designed and manufactured by Google, and is a part of the Titan family. As described in the Keyword Blog post, Titan M performs several security sensitive functions, including:
  • Storing and enforcing the locks and rollback counters used by Android Verified Boot.
  • Securely storing secrets and rate-limiting invalid attempts at retrieving them using the Weaver API.
  • Providing backing for the Android Strongbox Keymaster module, including Trusted User Presence and Protected Confirmation. Titan M has direct electrical connections to the Pixel's side buttons, so a remote attacker can't fake button presses. These features are available to third-party apps, such as FIDO U2F Authentication.
  • Enforcing factory-reset policies, so that lost or stolen phones can only be restored to operation by the authorized owner.
  • Ensuring that even Google can't unlock a phone or install firmware updates without the owner's cooperation with Insider Attack Resistance.
Including Titan M in Pixel 3 devices substantially reduces the attack surface. Because Titan M is a separate chip, the physical isolation mitigates against entire classes of hardware-level exploits such as Rowhammer, Spectre, and Meltdown. Titan M's processor, caches, memory, and persistent storage are not shared with the rest of the phone's system, so side channel attacks like these—which rely on subtle, unplanned interactions between internal circuits of a single component—are nearly impossible. In addition to its physical isolation, the Titan M chip contains many defenses to protect against external attacks.
Titan M is not just a hardened security microcontroller; it reflects a full-lifecycle approach to security designed with Pixel devices in mind. Titan M's security takes into consideration all the features visible to Android, down to the lowest-level physical and electrical circuit design, and extends beyond each physical device to our supply chain and manufacturing processes. At the physical level, we incorporated essential features optimized for the mobile experience: low power usage, low latency, hardware crypto acceleration, tamper detection, and secure, timely firmware updates. We improved and invested in the supply chain for Titan M by creating a custom provisioning process, which provides us with transparency and control starting from the earliest silicon stages.
Finally, in the interest of transparency, the Titan M firmware source code will be publicly available soon. While Google holds the root keys necessary to sign Titan M firmware, it will be possible to reproduce binary builds based on the public source for the purpose of binary transparency.
A closer look at Titan M

Titan (left) and Titan M (right)
Titan M's CPU is an ARM Cortex-M3 microprocessor specially hardened against side-channel attacks and augmented with defensive features to detect and respond to abnormal conditions. The Titan M CPU core also exposes several control registers, which can be used to taper access to chip configuration settings and peripherals. Once powered on, Titan M verifies the signature of its flash-based firmware using a public key built into the chip's silicon. If the signature is valid, the flash is locked so it can't be modified, and then the firmware begins executing.
Titan M also features several hardware accelerators: AES, SHA, and a programmable big number coprocessor for public key algorithms. These accelerators are flexible and can be initialized either with keys provided by firmware or with chip-specific and hardware-bound keys generated by the Key Manager module. Chip-specific keys are generated internally based on entropy derived from the True Random Number Generator (TRNG), and thus such keys are never available outside the chip over its entire lifetime.
While implementing Titan M firmware, we had to take many system constraints into consideration. For example, packing as many security features into Titan M's 64 Kbytes of RAM required all firmware to execute exclusively off the stack. And to reduce flash-wear, RAM contents can be preserved even during low-power mode when most hardware modules are turned off.
The diagram below provides a high-level view of the chip components described here.

Better security through transparency and innovation

At the heart of our implementation of Titan M are two broader trends: transparency and building a platform for future innovation.
Transparency around every step of the design process — from logic gates to boot code to the applications — gives us confidence in the defenses we're providing for our users. We know what's inside, how it got there, how it works, and who can make changes.
Custom hardware allows us to provide new features, capabilities, and performance not readily available in off-the-shelf components. These changes allow higher assurance use cases like two-factor authentication, medical device control, P2P payments, and others that we will help develop down the road.
As more of our lives are bound up in our phones, keeping those phones secure and trustworthy is increasingly important. Google takes that responsibility seriously. Titan M is just the latest step in our continuing efforts to improve the privacy and security of all our users.

Google and Android have your back by protecting your backups

Google Security Blog - Fri, 10/12/2018 - 4:01pm
Posted by Troy Kensinger, Technical Program Manager, Android Security and Privacy

Android is all about choice. As such, Android strives to provide users many options to protect their data. By combining Android's Backup Service and Google Cloud's Titan technology, Android has taken additional steps to secure users' data while maintaining their privacy.

Starting in Android Pie, devices can take advantage of a new capability where backed-up application data can only be decrypted by a key that is randomly generated at the client. This decryption key is encrypted using the user's lockscreen PIN/pattern/passcode, which isn’t known by Google. Then, this passcode-protected key material is encrypted to a Titan security chip on our datacenter floor. The Titan chip is configured to only release the backup decryption key when presented with a correct claim derived from the user's passcode. Because the Titan chip must authorize every access to the decryption key, it can permanently block access after too many incorrect attempts at guessing the user’s passcode, thus mitigating brute force attacks. The limited number of incorrect attempts is strictly enforced by a custom Titan firmware that cannot be updated without erasing the contents of the chip. By design, this means that no one (including Google) can access a user's backed-up application data without specifically knowing their passcode.
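To sketch the client-side shape of this scheme, the toy Java example below generates a random backup key and wraps it with a key derived from a passcode. This is purely illustrative, not Google's implementation: the PBKDF2 parameters, the AES-GCM wrapping, and the sample PIN are our assumptions, and the Titan chip's rate-limited release of the key has no client-side analogue here.

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class BackupKeyWrapSketch {
    public static void main(String[] args) throws Exception {
        // 1. Randomly generate the backup encryption key on the client.
        KeyGenerator generator = KeyGenerator.getInstance("AES");
        generator.init(256);
        SecretKey backupKey = generator.generateKey();

        // 2. Derive a wrapping key from the lockscreen passcode (sample PIN).
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] derived = factory.generateSecret(
                new PBEKeySpec("1234".toCharArray(), salt, 100_000, 256)).getEncoded();
        SecretKey wrappingKey = new SecretKeySpec(derived, "AES");

        // 3. Wrap the backup key. In the real design, recovering it additionally
        //    requires the Titan chip to accept a passcode-derived claim, with a
        //    strictly limited number of attempts.
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, wrappingKey);
        byte[] wrappedKey = cipher.doFinal(backupKey.getEncoded());
        System.out.println("Wrapped key is " + wrappedKey.length + " bytes");
    }
}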

To increase our confidence that this new technology securely prevents anyone from accessing users' backed-up application data, the Android Security & Privacy team hired global cyber security and risk mitigation expert NCC Group to complete a security audit. The audit's findings included positive notes on Google's security design processes, validation of code quality, and confirmation that mitigations for known attack vectors were taken into account prior to launching the service. While the audit did surface some issues, engineers corrected them quickly. For more details on how the end-to-end service works and a detailed report of NCC Group's findings, click here.

Getting external reviews of our security efforts is one of many ways that Google and Android maintain transparency and openness, which in turn helps users feel safe when it comes to their data. Whether it's hundreds of hours of gaming data or your personalized preferences in your favorite Google apps, our users' information is protected.

We want to acknowledge the contributions of Shabsi Walfish, Software Engineering Lead, Identity and Authentication, to this effort.

Control Flow Integrity in the Android kernel

Google Security Blog - Fri, 10/12/2018 - 9:28am

Posted by Sami Tolvanen, Staff Software Engineer, Android Security & Privacy

[Cross-posted from the Android Developers Blog]

Android's security model is enforced by the Linux kernel, which makes it a tempting target for attackers. We have put a lot of effort into hardening the kernel in previous Android releases and in Android 9, we continued this work by focusing on compiler-based security mitigations against code reuse attacks.
Google's Pixel 3 will be the first Android device to ship with LLVM's forward-edge Control Flow Integrity (CFI) enforcement in the kernel, and we have made CFI support available in Android kernel versions 4.9 and 4.14. This post describes how kernel CFI works and provides solutions to the most common issues developers might run into when enabling the feature.
Protecting against code reuse attacks

A common method of exploiting the kernel is using a bug to overwrite a function pointer stored in memory, such as a stored callback pointer or a return address that had been pushed to the stack. This allows an attacker to execute arbitrary parts of the kernel code to complete their exploit, even if they cannot inject executable code of their own. This method of gaining code execution is particularly popular with the kernel because of the huge number of function pointers it uses, and the existing memory protections that make code injection more challenging.
CFI attempts to mitigate these attacks by adding additional checks to confirm that the kernel's control flow stays within a precomputed graph. This doesn't prevent an attacker from changing a function pointer if a bug provides write access to one, but it significantly restricts the valid call targets, which makes exploiting such a bug more difficult in practice.

Figure 1. In an Android device kernel, LLVM's CFI limits 55% of indirect calls to at most 5 possible targets and 80% to at most 20 targets.

Gaining full program visibility with Link Time Optimization (LTO)

In order to determine all valid call targets for each indirect branch, the compiler needs to see all of the kernel code at once. Traditionally, compilers work on a single compilation unit (source file) at a time and leave merging the object files to the linker. LLVM's solution to CFI is to require the use of LTO, where the compiler produces LLVM-specific bitcode for all C compilation units, and an LTO-aware linker uses the LLVM back-end to combine the bitcode and compile it into native code.

Figure 2. A simplified overview of how LTO works in the kernel. All LLVM bitcode is combined, optimized, and generated into native code at link time.

Linux has used the GNU toolchain for assembling, compiling, and linking the kernel for decades. While we continue to use the GNU assembler for stand-alone assembly code, LTO requires us to switch to LLVM's integrated assembler for inline assembly, and either GNU gold or LLVM's own lld as the linker. Switching to a relatively untested toolchain on a huge software project will lead to compatibility issues, which we have addressed in our arm64 LTO patch sets for kernel versions 4.9 and 4.14.
In addition to making CFI possible, LTO also produces faster code due to global optimizations. However, additional optimizations often result in a larger binary size, which may be undesirable on devices with very limited resources. Disabling LTO-specific optimizations, such as global inlining and loop unrolling, can reduce binary size by sacrificing some of the performance gains. When using GNU gold, the aforementioned optimizations can be disabled with the following additions to LDFLAGS:
LDFLAGS += -plugin-opt=-inline-threshold=0 \
           -plugin-opt=-unroll-threshold=0

Note that flags to disable individual optimizations are not part of the stable LLVM interface and may change in future compiler versions.
Implementing CFI in the Linux kernel

LLVM's CFI implementation adds a check before each indirect branch to confirm that the target address points to a valid function with a correct signature. This prevents an indirect branch from jumping to an arbitrary code location and even limits the functions that can be called. As C compilers do not enforce similar restrictions on indirect branches, there were several CFI violations due to function type declaration mismatches even in the core kernel that we have addressed in our CFI patch sets for kernels 4.9 and 4.14.
Kernel modules add another complication to CFI, as they are loaded at runtime and can be compiled independently from the rest of the kernel. In order to support loadable modules, we have implemented LLVM's cross-DSO CFI support in the kernel, including a CFI shadow that speeds up cross-module look-ups. When compiled with cross-DSO support, each kernel module contains information about valid local branch targets, and the kernel looks up information from the correct module based on the target address and the modules' memory layout.

Figure 3. An example of a cross-DSO CFI check injected into an arm64 kernel. Type information is passed in X0 and the target address to validate in X1.

CFI checks naturally add some overhead to indirect branches, but due to more aggressive optimizations, our tests show that the impact is minimal, and overall system performance even improved 1-2% in many cases.
Enabling kernel CFI for an Android device

CFI for arm64 requires clang version >= 5.0 and binutils >= 2.27. The kernel build system also assumes that the LLVMgold.so plug-in is available in LD_LIBRARY_PATH. Pre-built toolchain binaries for clang and binutils are available in AOSP, but upstream binaries can also be used.
The following kernel configuration options are needed to enable kernel CFI:
CONFIG_LTO_CLANG=y
CONFIG_CFI_CLANG=y

Using CONFIG_CFI_PERMISSIVE=y may also prove helpful when debugging a CFI violation or during device bring-up. This option turns a violation into a warning instead of a kernel panic.
As mentioned in the previous section, the most common issue we ran into when enabling CFI on Pixel 3 was benign violations caused by function pointer type mismatches. When the kernel runs into such a violation, it prints out a runtime warning that contains the call stack at the time of the failure, and the call target that failed the CFI check. Changing the code to use a correct function pointer type fixes the issue. While we have fixed all known indirect branch type mismatches in the Android kernel, similar problems may still be found in device-specific drivers, for example.
CFI failure (target: [<fffffff3e83d4d80>] my_target_function+0x0/0xd80):
------------[ cut here ]------------
kernel BUG at kernel/cfi.c:32!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP

Call trace:

[<ffffff8752d00084>] handle_cfi_failure+0x20/0x28
[<ffffff8752d00268>] my_buggy_function+0x0/0x10
…

Figure 4. An example of a kernel panic caused by a CFI failure.

Another potential pitfall is address space conflicts, but these should be less common in driver code. LLVM's CFI checks only understand kernel virtual addresses, and any code that runs at another exception level or makes an indirect call to a physical address will result in a CFI violation. These types of failures can be addressed by disabling CFI for a single function using the __nocfi attribute, or even disabling CFI for entire code files using the $(DISABLE_CFI) compiler flag in the Makefile.
static int __nocfi address_space_conflict(void)
{
	void (*fn)(void);

	/* branching to a physical address trips CFI w/o __nocfi */
	fn = (void *)__pa_symbol(function_name);
	cpu_install_idmap();
	fn();
	cpu_uninstall_idmap();

	return 0;
}

Figure 5. An example of fixing a CFI failure caused by an address space conflict.

Finally, like many hardening features, CFI can also be tripped by memory corruption errors that might otherwise result in random kernel crashes at a later time. These may be more difficult to debug, but memory debugging tools such as KASAN can help here.
Conclusion

We have implemented support for LLVM's CFI in Android kernels 4.9 and 4.14. Google's Pixel 3 will be the first Android device to ship with these protections, and we have made the feature available to all device vendors through the Android common kernel. If you are shipping a new arm64 device running Android 9, we strongly recommend enabling kernel CFI to help protect against kernel vulnerabilities.
LLVM's CFI protects indirect branches against attackers who manage to gain access to a function pointer stored in kernel memory. This makes a common method of exploiting the kernel more difficult. Our future work involves also protecting function return addresses from similar attacks using LLVM's Shadow Call Stack, which will be available in an upcoming compiler release.

Trustworthy Chrome Extensions, by Default

Google Security Blog - Mon, 10/01/2018 - 2:51pm
Posted by James Wagner, Chrome Extensions Product Manager

[Cross-posted from the Chromium blog]

Incredibly, it’s been nearly a decade since we launched the Chrome extensions system. Thanks to the hard work and innovation of our developer community, there are now more than 180,000 extensions in the Chrome Web Store, and nearly half of Chrome desktop users actively use extensions to customize Chrome and their experience on the web.

The extensions team's dual mission is to help users tailor Chrome’s functionality to their individual needs and interests, and to empower developers to build rich and useful extensions. But, first and foremost, it’s crucial that users be able to trust the extensions they install are safe, privacy-preserving, and performant. Users should always have full transparency about the scope of their extensions’ capabilities and data access.

We’ve recently taken a number of steps toward improved extension security with the launch of out-of-process iframes, the removal of inline installation, and significant advancements in our ability to detect and block malicious extensions using machine learning. Looking ahead, there are more fundamental changes needed so that all Chrome extensions are trustworthy by default.

Today we’re announcing some upcoming changes and plans for the future:

User controls for host permissions

Beginning in Chrome 70, users will have the choice to restrict extension host access to a custom list of sites, or to configure extensions to require a click to gain access to the current page.


While host permissions have enabled thousands of powerful and creative extension use cases, they have also led to a broad range of misuse - both malicious and unintentional - because they allow extensions to automatically read and change data on websites. Our aim is to improve user transparency and control over when extensions are able to access site data. In subsequent milestones, we’ll continue to optimize the user experience toward this goal while improving usability. If your extension requests host permissions, we encourage you to review our transition guide and begin testing as soon as possible.

Changes to the extensions review process

Going forward, extensions that request powerful permissions will be subject to additional compliance review. We’re also looking very closely at extensions that use remotely hosted code, with ongoing monitoring. Your extension’s permissions should be as narrowly-scoped as possible, and all your code should be included directly in the extension package, to minimize review time.
New code reliability requirements

Starting today, Chrome Web Store will no longer allow extensions with obfuscated code. This includes code within the extension package as well as any external code or resource fetched from the web. This policy applies immediately to all new extension submissions. Existing extensions with obfuscated code can continue to submit updates over the next 90 days, but will be removed from the Chrome Web Store in early January if not compliant.

Today over 70% of malicious and policy violating extensions that we block from Chrome Web Store contain obfuscated code. At the same time, because obfuscation is mainly used to conceal code functionality, it adds a great deal of complexity to our review process. This is no longer acceptable given the aforementioned review process changes.

Additionally, since JavaScript code is always running locally on the user's machine, obfuscation is insufficient to protect proprietary code from a truly motivated reverse engineer. Obfuscation techniques also come with hefty performance costs such as slower execution and increased file and memory footprints.

Ordinary minification, on the other hand, typically speeds up code execution as it reduces code size, and is much more straightforward to review. Thus, minification will still be allowed, including the following techniques:

  • Removal of whitespace, newlines, code comments, and block delimiters
  • Shortening of variable and function names
  • Collapsing the number of JavaScript files
If you have an extension in the store with obfuscated code, please review our updated content policies as well as our recommended minification techniques for Google Developers, and submit a new compliant version before January 1st, 2019.

Required 2-step verification
In 2019, enrollment in 2-Step Verification will be required for Chrome Web Store developer accounts. If your extension becomes popular, it can attract attackers who want to steal it by hijacking your account, and 2-Step Verification adds an extra layer of security by requiring a second authentication step from your phone or a physical security key. We strongly recommend that you enroll as soon as possible.
For even stronger account security, consider the Advanced Protection Program. Advanced protection offers the same level of security that Google relies on for its own employees, requiring a physical security key to provide the strongest defense against phishing attacks.

Looking ahead: Manifest v3
In 2019 we will introduce the next extensions manifest version. Manifest v3 will entail additional platform changes that aim to create stronger security, privacy, and performance guarantees. We want to help all developers fall into the pit of success; writing a secure and performant extension in Manifest v3 should be easy, while writing an insecure or non-performant extension should be difficult.
Some key goals of manifest v3 include:
  • More narrowly-scoped and declarative APIs, to decrease the need for overly-broad access and enable more performant implementation by the browser, while preserving important functionality
  • Additional, easier mechanisms for users to control the permissions granted to extensions
  • Modernizing to align with new web capabilities, such as supporting Service Workers as a new type of background process
We intend to make the transition to manifest v3 as smooth as possible and we’re thinking carefully about the rollout plan. We’ll be in touch soon with more specific details.
We recognize that some of the changes announced today may require effort in the future, depending on your extension. But we believe the collective result will be worth that effort for all users, developers, and for the long term health of the Chrome extensions ecosystem. We’re committed to working with you to transition through these changes and are very interested in your feedback. If you have questions or comments, please get in touch with us on the Chromium extensions forum.

Android and Google Play Security Rewards Programs surpass $3M in payouts

Google Security Blog - Thu, 09/20/2018 - 12:44pm
Posted by Jason Woloz and Mayank Jain, Android Security & Privacy Team

[Cross-posted from the Android Developers Blog]

Our Android and Play security reward programs help us work with top researchers from around the world to improve Android ecosystem security every day. Thank you to all the amazing researchers who submitted vulnerability reports.

Android Security Rewards

In the ASR program's third year, we received over 470 qualifying vulnerability reports from researchers and the average pay per researcher jumped by 23%. To date, the ASR program has rewarded researchers with over $3M, paying out roughly $1M per year.
Here are some of the highlights from the Android Security Rewards program's third year:
  • There were no payouts for our highest possible reward: a complete remote exploit chain leading to TrustZone or Verified Boot compromise.
  • 99 individuals contributed one or more fixes.
  • The ASR program's reward averages were $2,600 per reward and $12,500 per researcher.
  • Guang Gong received our highest reward amount to date: $105,000 for his submission of a remote exploit chain.
As part of our ongoing commitment to security we regularly update our programs and policies based on ecosystem feedback. We also updated our severity guidelines for evaluating the impact of reported security vulnerabilities against the Android platform.

Google Play Security Rewards

In October 2017, we rolled out the Google Play Security Reward Program to encourage security research into popular Android apps available on Google Play. So far, researchers have reported over 30 vulnerabilities through the program, earning a combined bounty amount of over $100K.
If undetected, these vulnerabilities could have potentially led to elevation of privilege, access to sensitive data and remote code execution on devices.

Keeping devices secure

In addition to rewarding for vulnerabilities, we continue to work with the broad and diverse Android ecosystem to protect users from issues reported through our program. We collaborate with manufacturers to ensure that these issues are fixed on their devices through monthly security updates. Over 250 device models have a majority of their deployed devices running a security update from the last 90 days. The list below shows the models with a majority of deployed devices running a security update from the last three months:


ANS: L50
Asus: ZenFone 5Z (ZS620KL/ZS621KL), ZenFone Max Plus M1 (ZB570TL), ZenFone 4 Pro (ZS551KL), ZenFone 5 (ZE620KL), ZenFone Max M1 (ZB555KL), ZenFone 4 (ZE554KL), ZenFone 4 Selfie Pro (ZD552KL), ZenFone 3 (ZE552KL), ZenFone 3 Zoom (ZE553KL), ZenFone 3 (ZE520KL), ZenFone 3 Deluxe (ZS570KL), ZenFone 4 Selfie (ZD553KL), ZenFone Live L1 (ZA550KL), ZenFone 5 Lite (ZC600KL), ZenFone 3s Max (ZC521TL)
BlackBerry: BlackBerry MOTION, BlackBerry KEY2
Blu: Grand XL LTE, Vivo ONE, R2_3G, Grand_M2, BLU STUDIO J8 LTE
bq: Aquaris V Plus, Aquaris V, Aquaris U2 Lite, Aquaris U2, Aquaris X, Aquaris X2, Aquaris X Pro, Aquaris U Plus, Aquaris X5 Plus, Aquaris U lite, Aquaris U
Docomo: F-04K, F-05J, F-03H
Essential Products: PH-1
Fujitsu: F-01K
General Mobile: GM8, GM8 Go
Google: Pixel 2 XL, Pixel 2, Pixel XL, Pixel
HTC: U12+, HTC U11+
Huawei: Honor Note10, nova 3, nova 3i, Huawei Nova 3I, 荣耀9i, 华为G9青春版, Honor Play, G9青春版, P20 Pro, Honor V9, huawei nova 2, P20 lite, Honor 10, Honor 8 Pro, Honor 6X, Honor 9, nova 3e, P20, PORSCHE DESIGN HUAWEI Mate RS, FRD-L02, HUAWEI Y9 2018, Huawei Nova 2, Honor View 10, HUAWEI P20 Lite, Mate 9 Pro, Nexus 6P, HUAWEI Y5 2018, Honor V10, Mate 10 Pro, Mate 9, Honor 9 Lite, 荣耀9青春版, nova 2i, HUAWEI nova 2 Plus, P10 lite, nova 青春版本, FIG-LX1, HUAWEI G Elite Plus, HUAWEI Y7 2018, Honor 7S, HUAWEI P smart, P10, Honor 7C, 荣耀8青春版, HUAWEI Y7 Prime 2018, P10 Plus, 荣耀畅玩7X, HUAWEI Y6 2018, Mate 10 lite, Honor 7A, P9 Plus, 华为畅享8, honor 6x, HUAWEI P9 lite mini, HUAWEI GR5 2017, Mate 10
Itel: P13
Kyocera: X3
Lanix: Alpha_950, Ilium X520
Lava: Z61, Z50
LGE: LG Q7+, LG G7 ThinQ, LG Stylo 4, LG K30, V30+, LG V35 ThinQ, Stylo 2 V, LG K20 V, ZONE4, LG Q7, DM-01K, Nexus 5X, LG K9, LG K11
Motorola: Moto Z Play Droid, moto g(6) plus, Moto Z Droid, Moto X (4), Moto G Plus (5th Gen), Moto Z (2) Force, Moto G (5S) Plus, Moto G (5) Plus, moto g(6) play, Moto G (5S), moto e5 play, moto e(5) play, moto e(5) cruise, Moto E4, Moto Z Play, Moto G (5th Gen)
Nokia: Nokia 8, Nokia 7 plus, Nokia 6.1, Nokia 8 Sirocco, Nokia X6, Nokia 3.1
OnePlus: OnePlus 6, OnePlus5T, OnePlus3T, OnePlus5, OnePlus3
Oppo: CPH1803, CPH1821, CPH1837, CPH1835, CPH1819, CPH1719, CPH1613, CPH1609, CPH1715, CPH1861, CPH1831, CPH1801, CPH1859, A83, R9s Plus
Positivo: Twist, Twist Mini
Samsung: Galaxy A8 Star, Galaxy J7 Star, Galaxy Jean, Galaxy On6, Galaxy Note9, Galaxy J3 V, Galaxy A9 Star, Galaxy J7 V, Galaxy S8 Active, Galaxy Wide3, Galaxy J3 Eclipse, Galaxy S9+, Galaxy S9, Galaxy A9 Star Lite, Galaxy J7 Refine, Galaxy J7 Max, Galaxy Wide2, Galaxy J7(2017), Galaxy S8+, Galaxy S8, Galaxy A3(2017), Galaxy Note8, Galaxy A8+(2018), Galaxy J3 Top, Galaxy J3 Emerge, Galaxy On Nxt, Galaxy J3 Achieve, Galaxy A5(2017), Galaxy J2(2016), Galaxy J7 Pop, Galaxy A6, Galaxy J7 Pro, Galaxy A6 Plus, Galaxy Grand Prime Pro, Galaxy J2 (2018), Galaxy S6 Active, Galaxy A8(2018), Galaxy J3 Pop, Galaxy J3 Mission, Galaxy S6 edge+, Galaxy Note Fan Edition, Galaxy J7 Prime, Galaxy A5(2016)
Sharp: シンプルスマホ4, AQUOS sense plus (SH-M07), AQUOS R2 SH-03K, X4, AQUOS R SH-03J, AQUOS R2 SHV42, X1, AQUOS sense lite (SH-M05)
Sony: Xperia XZ2 Premium, Xperia XZ2 Compact, Xperia XA2, Xperia XA2 Ultra, Xperia XZ1 Compact, Xperia XZ2, Xperia XZ Premium, Xperia XZ1, Xperia L2, Xperia X
Tecno: F1, CAMON I Ace
Vestel: Vestel Z20
Vivo: vivo 1805, vivo 1803, V9 6GB, Y71, vivo 1802, vivo Y85A, vivo 1726, vivo 1723, V9, vivo 1808, vivo 1727, vivo 1724, vivo X9s Plus, Y55s, vivo 1725, Y66, vivo 1714, 1609, 1601
Vodafone: Vodafone Smart N9
Xiaomi: Mi A2, Mi A2 Lite, MI 8, MI 8 SE, MIX 2S, Redmi 6Pro, Redmi Note 5 Pro, Redmi Note 5, Mi A1, Redmi S2, MI MAX 2, MI 6X
ZTE: BLADE A6 MAX

Thank you to everyone internally and externally who helped make Android safer and stronger in the past year. Together, we made a huge investment in security research that helps Android users everywhere. If you want to get involved to make next year even better, check out our detailed program rules. For tips on how to submit complete reports, see Bug Hunter University.

Introducing the Tink cryptographic software library

Google Security Blog - Fri, 08/31/2018 - 4:00pm
Posted by Thai Duong, Information Security Engineer, on behalf of Tink team

At Google, many product teams use cryptographic techniques to protect user data. In cryptography, subtle mistakes can have serious consequences, and understanding how to implement cryptography correctly requires digesting decades' worth of academic literature. Needless to say, many developers don’t have time for that.

To help our developers ship secure cryptographic code we’ve developed Tink—a multi-language, cross-platform cryptographic library. We believe in open source and want Tink to become a community project—thus Tink has been available on GitHub since the early days of the project, and it has already attracted several external contributors. At Google, Tink is already being used to secure data of many products such as AdMob, Google Pay, Google Assistant, Firebase, the Android Search App, etc. After nearly two years of development, today we’re excited to announce Tink 1.2.0, the first version that supports cloud, Android, iOS, and more!

Tink aims to provide cryptographic APIs that are secure, easy to use correctly, and hard(er) to misuse. Tink is built on top of existing libraries such as BoringSSL and Java Cryptography Architecture, but includes countermeasures to many weaknesses in these libraries, which were discovered by Project Wycheproof, another project from our team.

With Tink, many common cryptographic operations such as data encryption, digital signatures, etc. can be done with only a few lines of code. Here is an example of encrypting and decrypting with our AEAD interface in Java:

import com.google.crypto.tink.Aead;
import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.aead.AeadFactory;
import com.google.crypto.tink.aead.AeadKeyTemplates;

// 1. Generate the key material.
KeysetHandle keysetHandle = KeysetHandle.generateNew(
    AeadKeyTemplates.AES256_EAX);

// 2. Get the primitive.
Aead aead = AeadFactory.getPrimitive(keysetHandle);

// 3. Use the primitive.
byte[] plaintext = ...;
byte[] additionalData = ...;
byte[] ciphertext = aead.encrypt(plaintext, additionalData);

Tink aims to eliminate as many potential misuses as possible. For example, if the underlying encryption mode requires nonces and nonce reuse makes it insecure, then Tink does not allow the user to pass nonces. Interfaces have security guarantees that must be satisfied by each primitive implementing the interface. This may exclude some encryption modes. Rather than adding them to existing interfaces and weakening the guarantees of the interface, it is possible to add new interfaces and describe the security guarantees appropriately.

We’re cryptographers and security engineers working to improve Google’s product security, so we built Tink to make our job easier. Tink shows the claimed security properties (e.g., safe against chosen-ciphertext attacks) right in the interfaces, allowing security auditors and automated tools to quickly discover usages where the security guarantees don’t match the security requirements. Tink also isolates APIs for potentially dangerous operations (e.g., loading cleartext keys from disk), which allows discovering, restricting, monitoring and logging their usage.
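For instance, reading a cleartext keyset from disk deliberately goes through a separate, clearly named entry point rather than the ordinary keyset-loading API, so audits can simply search for it. A small sketch, assuming the CleartextKeysetHandle and JsonKeysetReader classes as documented for Tink 1.2 and a hypothetical keyset.json file:

import com.google.crypto.tink.CleartextKeysetHandle;
import com.google.crypto.tink.JsonKeysetReader;
import com.google.crypto.tink.KeysetHandle;
import java.io.File;

// The dangerous operation is isolated in its own API, making it easy for
// auditors to find, restrict, and log every use of cleartext key material.
KeysetHandle handle = CleartextKeysetHandle.read(
    JsonKeysetReader.withFile(new File("keyset.json")));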

Tink provides support for key management, including key rotation and phasing out deprecated ciphers. For example, if a cryptographic primitive is found to be broken, you can switch to a different primitive by rotating keys, without changing or recompiling code.
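As a hedged sketch of what a rotation might look like, assuming the KeysetManager API shown in the Tink 1.2 Java HOW-TO: after the call below, new encryptions use the fresh AES256-GCM key, while older keys remain in the keyset so existing ciphertexts can still be decrypted.

import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.KeysetManager;
import com.google.crypto.tink.aead.AeadKeyTemplates;

// Add a new AES256-GCM key to an existing keyset and make it the primary key.
KeysetHandle rotated = KeysetManager
    .withKeysetHandle(keysetHandle)
    .rotate(AeadKeyTemplates.AES256_GCM)
    .getKeysetHandle();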

Tink is also extensible by design: it is easy to add a custom cryptographic scheme or an in-house key management system so that it works seamlessly with other parts of Tink. No part of Tink is hard to replace or remove. All components are composable, and can be selected and assembled in various combinations. For example, if you need only digital signatures, you can exclude symmetric key encryption components to minimize code size in your application.

To get started, please check out our HOW-TO for Java, C++ and Obj-C. If you'd like to talk to the developers or get notified about project updates, you may want to subscribe to our mailing list. To join, simply send an empty email to tink-users+subscribe@googlegroups.com. You can also post your questions to StackOverflow, just remember to tag them with tink.

We’re excited to share this with the community, and welcome your feedback!

Evolution of Android Security Updates

Google Security Blog - Wed, 08/22/2018 - 2:59pm
Posted by Dave Kleidermacher, VP, Head of Security - Android, Chrome OS, Play

[Cross-posted from the Android Developers Blog]

At Google I/O 2018, in our What's New in Android Security session, we shared a brief update on the Android security updates program. With the official release of Android 9 Pie, we wanted to share a more comprehensive update on the state of security updates, including best practice guidance for manufacturers, how we're making Android easier to update, and how we're ensuring compliance to Android security update releases.
Commercial Best Practices around Android Security Updates

As we noted in our 2017 Android Security Year-in-Review, Android's anti-exploitation strength now leads the mobile industry and has made it exceedingly difficult and expensive to leverage operating system bugs into compromises. Nevertheless, an important defense-in-depth strategy is to ensure critical security updates are delivered in a timely manner. Monthly security updates are the recommended best practice for Android smartphones. We deliver monthly Android source code patches to smartphone manufacturers so they may incorporate those patches into firmware updates. We also deliver firmware updates over-the-air to Pixel devices on a reliable monthly cadence and offer the free use of Google's firmware over-the-air (FOTA) servers to manufacturers. Monthly security updates are also required for devices covered under the Android One program.
While monthly security updates are best, at minimum, Android manufacturers should deliver regular security updates in advance of coordinated disclosure of high-severity vulnerabilities, published in our Android bulletins. Since the common vulnerability disclosure window is 90 days, a 90-day update frequency represents a minimum security hygiene requirement.
Enterprise Best Practices

Product security factors into the purchase decisions of enterprises, which often consider device security update cadence, flexibility of policy controls, and authentication features. Earlier this year, we introduced the Android Enterprise Recommended program to help businesses make these decisions. To be listed, Android devices must satisfy numerous requirements, including regular security updates: at least every 90 days, with monthly updates strongly recommended. In addition to businesses, consumers interested in understanding security update practices and commitment may also refer to the Enterprise Recommended list.
Making Android Easier to Update

We've also been working to make Android easier to update, overall. A key pillar of that strategy is to improve modularity and clarity of interfaces, enabling operating system subsystems to be updated without adversely impacting others. Project Treble is one example of this strategy in action and has enabled devices to update to Android P more easily and efficiently than was possible in previous releases. The modularity strategy applies equally well for security updates, as a framework security update can be performed independently of device specific components.
Another part of the strategy involves the extraction of operating system services into user-mode applications that can be updated independently, and sometimes more rapidly, than the base operating system. For example, Google Play services, including secure networking components, and the Chrome browser can be updated individually, just like other Google Play apps.
Partner programs are a third key pillar of the updateability strategy. One example is the GMS Express program, in which Google is working closely with system-on-chip (SoC) suppliers to provide monthly pre-integrated and pre-tested Android security updates for SoC reference designs, reducing cost and time to market for delivering them to users.
Security Patch Level Compliance

Recently, researchers reported a handful of missing security bug fixes across some Android devices. Initial reports had several inaccuracies, which have since been corrected. We have been developing security update testing systems that are now making compliance failures less likely to occur. In particular, we recently delivered a new testing infrastructure that enables manufacturers to develop and deploy automated tests across lower levels of the firmware stack that were previously relegated to manual testing. In addition, the Android build approval process now includes scanning of device images for specific patterns, reducing the risk of omission.
Looking Forward

In 2017, about a billion Android devices received security updates, representing approximately 30% growth over the preceding year. We continue to work hard devising thoughtful strategies to make Android easier to update by introducing improved processes and programs for the ecosystem. In addition, we are also working to drive increased and more expedient partner adoption of our security update and compliance requirements. As a result, over coming quarters, we expect the largest ever growth in the number of Android devices receiving regular security updates.
Bugs are inevitable in all complex software systems, but exploitability of those bugs is not. We're working hard to ensure that the incidence of potentially harmful exploitation of bugs continues to decline, so that security updates can become less frequent, not more frequent, over time. While monthly security updates represent today's best practice, we see a future in which security updates become easier and rarer, while maintaining the same goal of protecting all users across all devices.
Categories: Google Security Blog

A reminder about government-backed phishing

Google Security Blog - Mon, 08/20/2018 - 8:42pm
Posted by Shane Huntley, Threat Analysis Group

TLDR: Government-backed phishing has been in the news lately. If you receive a warning in Gmail, be sure to take prompt action: enable two-factor authentication on your account, and consider enrolling in the Advanced Protection Program.

One of the main threats to all email users (whatever service you use) is phishing: attempts to trick you into providing a password that an attacker can use to sign into your account. Our improving technology has enabled us to significantly decrease the volume of phishing emails that get through to our users. Automated protections, account security (like security keys), and specialized warnings give Gmail users industry-leading security.

Beyond phishing for the purposes of fraud, a small minority of users in all corners of the world are still targeted by sophisticated government-backed attackers. These attempts come from dozens of countries. Since 2012, we've shown prominent warnings within Gmail notifying users that they may be targets of these types of phishing attempts; we show thousands of these warnings every month, even if we have blocked the specific attempt.

We also send alerts to G Suite administrators if someone in their corporate network may have been the target of government-backed phishing. And we regularly post public advisories to make sure that people are aware of this risk.

This is what an account warning looks like; an extremely small fraction of users will ever see one of these, but if you receive this warning from us, it's important to take immediate action on it.
We intentionally send these notices in batches to all users who may be at risk, rather than at the moment we detect the threat itself, so that attackers cannot track some of our defense strategies. We have an expert team in our Threat Analysis Group, and we use a variety of technologies to detect these attempts. We also notify law enforcement about what we’re seeing; they have additional tools to investigate these attacks.

We hope you never receive this type of warning, but if you do, please take action right away to enhance the security of your accounts.

Even if you don’t receive such a warning, you should enable 2-step verification in Gmail. And if you think you’re at particular risk of government-backed phishing, consider enrolling in the Advanced Protection Program, which provides even stronger levels of security.
Categories: Google Security Blog

Expanding our Vulnerability Reward Program to combat platform abuse

Google Security Blog - Wed, 08/15/2018 - 12:00pm
Posted by Eric Brown and Marc Henson, Trust & Safety

Since 2010, Google's Vulnerability Reward Programs have awarded more than $12 million to researchers and created a thriving Google-focused security community. For the past two years, some of these rewards were for bug reports that were not strictly security vulnerabilities, but techniques that allow third parties to successfully bypass our abuse, fraud, and spam systems.

Today, we are expanding our Vulnerability Reward Program to formally invite researchers to submit these reports.

This expansion is intended to reward research that helps us mitigate potential abuse methods. Examples of potentially valid reports for this program include bypassing our account recovery systems at scale, identifying services vulnerable to brute force attacks, circumventing restrictions on content use and sharing, or purchasing items from Google without paying. Valid reports tend to result in changes to the product's code, as opposed to removal of individual pieces of content.

This program does not cover individual instances of abuse, such as the posting of content that violates our guidelines or policies, sending spam emails, or providing links to malware. These should continue to be reported through existing product-specific channels, such as for Google+, YouTube, Gmail, and Blogger.

Reports submitted to our Vulnerability Reward Program that outline abuse methods are reviewed by experts on our Trust & Safety team, which specializes in the prevention and mitigation of abuse, fraud, and spam activity on our products.

We greatly value our relationship with the research community, and we’re excited to expand on it to help make the internet a safer place for everyone. To learn more, see our updated rules.

Happy hunting!
Categories: Google Security Blog

Google Public DNS turns 8.8.8.8 years old

Google Security Blog - Fri, 08/10/2018 - 9:31pm
Posted by Alexander Dupuy, Software Engineer

Once upon a time, we launched Google Public DNS, which you might know by its iconic IP address, 8.8.8.8. (Sunday, August 12th, 2018, at 00:30 UTC marks eight years, eight months, eight days and eight hours since the announcement.) Though not as well-known as Google Search or Gmail, the four eights have had quite a journey—and some pretty amazing growth! Whether it’s travelers in India’s train stations or researchers on the remote Antarctic island Bouvetøya, hundreds of millions of people the world over rely on our free DNS service to turn domain names like wikipedia.org into IP addresses like 208.80.154.224.
Google Public DNS query growth and major feature launches
Today, it’s estimated that about 10% of internet users rely on 8.8.8.8, and it serves well over a trillion queries per day. But while we’re really proud of that growth, what really matters is whether it’s a valuable service for our users. Namely, has Google Public DNS made the internet faster for users? Does it safeguard their privacy? And does it help them get to internet sites more reliably and securely?

In other words, has 8.8.8.8 made DNS and the internet better as a whole? Here at Google, we think it has. On this numerological anniversary, let’s take a look at how Google Public DNS has realized those goals and what lies ahead.
Making the internet faster

From the start, a key goal of Google Public DNS was to make the internet faster. When we began the project in 2007, Google had already made it faster to search the web, but it could take a while to get to your destination. Back then, most DNS lookups used your ISP’s resolvers, and with small caches, they often had to make multiple DNS queries before they could return an address.

Google Public DNS resolvers’ DNS caches hold tens of billions of entries worldwide. And because hundreds of millions of clients use them every day, they usually return the address for your domain queries without extra lookups, connecting you to the internet that much faster.
DNS resolution process for example.org
Speeding up DNS responses is just one part of making the web faster—getting web content from servers closer to you can have an even bigger impact. Content Delivery Networks (CDNs) distribute large, delay-sensitive content like streaming videos to users around the world. CDNs use DNS to direct users to the nearest servers, and rely on GeoIP maps to determine the best location.

Everything’s good if your DNS query comes from an ISP resolver that is close to you, but what happens if the resolver is far away, as it is for researchers on Bouvetøya? In that case, the CDN directs you to a server near the DNS resolver—but not the one closest to you. In 2010, along with other DNS and CDN services, we proposed a solution that lets DNS resolvers send part of your IP address in their DNS queries, so CDN name servers can get your best possible GeoIP location (short of sending your entire IP address). By sending only the first three parts of users’ IP addresses (e.g. 192.0.2.x) in the EDNS Client Subnet (ECS) extension, CDNs can return the closest content while maintaining user privacy.

We continue to enhance ECS (now published as RFC 7871), for example by adding automatic detection of name server ECS support. And today, we're happy to report that ECS support is widespread among CDNs.
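To make the privacy tradeoff concrete, here is a minimal sketch (ours, not Google Public DNS code) of the truncation a resolver performs before attaching your address to an ECS query: only the /24 network prefix is forwarded, never the full IPv4 address.

import java.net.InetAddress;
import java.net.UnknownHostException;

public class EcsPrefix {
    // Zero the final octet so only the first three parts of the
    // address (a /24 prefix) are sent in the ECS option.
    static String clientSubnetPrefix(String clientIp) throws UnknownHostException {
        byte[] octets = InetAddress.getByName(clientIp).getAddress();
        octets[octets.length - 1] = 0; // IPv4-only sketch
        return InetAddress.getByAddress(octets).getHostAddress() + "/24";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(clientSubnetPrefix("192.0.2.45")); // prints 192.0.2.0/24
    }
}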

Safeguarding user privacy

From day one of our service, we've always been serious about user privacy. Like all Google services, we honor the general Google Privacy Policy and are guided by Google's Privacy Principles. In addition, Google Public DNS publishes a privacy practice statement about the information we collect, how it is used, and how it is not used. These policies protect the privacy of your DNS queries once they arrive at Google, but queries can still be seen (and potentially modified) en route to 8.8.8.8.

To address this weakness, we launched a public beta of DNS-over-HTTPS on April 1, 2016, embedding your DNS queries in the secure and private HTTPS protocol. Despite the launch date, this was not an April Fool's joke, and in the two years since, it has grown dramatically, with millions of users and support from another major public DNS service. Today, we are working in the IETF with other DNS operators and clients on the DNS Queries over HTTPS Internet Draft specification, which we also support.
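As a rough illustration of how simple DNS-over-HTTPS makes things for clients, the sketch below (our example, using Google's JSON API on the dns.google endpoint) resolves a name over HTTPS; the JSON response also carries an "AD" flag showing whether the answer was DNSSEC-validated.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DohLookup {
    public static void main(String[] args) throws Exception {
        // Query Google Public DNS over HTTPS instead of plain UDP port 53.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://dns.google/resolve?name=wikipedia.org&type=A"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The JSON body lists the answers, plus an "AD" field indicating
        // whether the response passed DNSSEC validation.
        System.out.println(response.body());
    }
}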

Securing the Domain Name System

We’ve always been very concerned with the integrity and security of the responses that Google Public DNS provides. From the start, we rejected the practice of hijacking nonexistent domain (NXDOMAIN) responses, working to provide users with accurate and honest DNS responses, even when attackers tried to corrupt them.

In 2008, Dan Kaminsky publicized a major security weakness in the DNS protocol that left most DNS resolvers vulnerable to spoofing that poisoned their DNS caches. When we launched 8.8.8.8 the following year, we not only used industry best practices to mitigate this vulnerability, but also developed an extensive set of additional protections.

While those protected our DNS service from most attackers, they can't help in cases where an attacker can see our queries. Starting in 2010, the internet began to use DNSSEC security in earnest, making it possible to protect cryptographically signed domains against such man-in-the-middle and man-on-the-side attacks. In 2013, Google Public DNS became the first major public DNS resolver to implement DNSSEC validation for all its DNS queries, doubling the percentage of end users protected by DNSSEC from 3.3% to 8.1%.

In addition to protecting the integrity of DNS responses, Google Public DNS also works to block DNS denial of service attacks by rate limiting both our queries to name servers and reflection or amplification attacks that try to flood victims’ network connections.

Internet access for all

A big part of Google Public DNS's tremendous growth comes from free public internet services. We make the internet faster for hundreds of these services, from free WiFi in San Francisco's parks to LinkNYC internet kiosk hotspots and the Railtel partnership in India's train stations. In places like Africa and Southeast Asia, many ISPs also use 8.8.8.8 to resolve their users' DNS queries. Providing free DNS resolution to anyone in the world, even to other companies, supports internet access worldwide as a part of Google's Next Billion Users initiative.

APNIC Labs map of worldwide usage
Looking ahead


Today, Google Public DNS is the largest public DNS resolver. There are now about a dozen such services providing value-added features like content and malware filtering, and recent entrants Quad9 and Cloudflare also provide privacy for DNS queries over TLS or HTTPS.

But recent incidents that used BGP hijacking to attack DNS are concerning. Increasing the adoption and use of DNSSEC is an effective way to protect against such attacks, and as the largest DNSSEC-validating resolver, we hope we can influence things in that direction. We are also exploring how to improve the security of the path from resolvers to authoritative name servers, an issue not currently addressed by existing DNS standards.

In short, we continue to improve Google Public DNS both behind the scenes and in ways visible to users, adding features that users want from their DNS service. Stay tuned for some exciting Google Public DNS announcements in the near future!
Categories: Google Security Blog

Mitigating Spectre with Site Isolation in Chrome

Google Security Blog - Thu, 07/19/2018 - 10:44am
Posted by Charlie Reis, Site Isolator

Speculative execution side-channel attacks like Spectre are a newly discovered security risk for web browsers. A website could use such attacks to steal data or login information from other websites that are open in the browser. To better mitigate these attacks, we're excited to announce that Chrome 67 has enabled a security feature called Site Isolation on Windows, Mac, Linux, and Chrome OS. Site Isolation has been optionally available as an experimental enterprise policy since Chrome 63, but many known issues have been resolved since then, making it practical to enable by default for all desktop Chrome users.

This launch is one phase of our overall Site Isolation project. Stay tuned for additional security updates that will mitigate attacks beyond Spectre (e.g., attacks from fully compromised renderer processes).

What is Spectre?

In January, Google Project Zero disclosed a set of speculative execution side-channel attacks that became publicly known as Spectre and Meltdown. An additional variant of Spectre was disclosed in May. These attacks use the speculative execution features of most CPUs to access parts of memory that should be off-limits to a piece of code, and then use timing attacks to discover the values stored in that memory. Effectively, this means that untrustworthy code may be able to read any memory in its process's address space.

This is particularly relevant for web browsers, since browsers run potentially malicious JavaScript code from multiple websites, often in the same process. In theory, a website could use such an attack to steal information from other websites, violating the Same Origin Policy. All major browsers have already deployed some mitigations for Spectre, including reducing timer granularity and changing their JavaScript compilers to make the attacks less likely to succeed. However, we believe the most effective mitigation is offered by approaches like Site Isolation, which try to avoid having data worth stealing in the same process, even if a Spectre attack occurs.

What is Site Isolation?

Site Isolation is a large change to Chrome's architecture that limits each renderer process to documents from a single site. As a result, Chrome can rely on the operating system to prevent attacks between processes, and thus, between sites. Note that Chrome uses a specific definition of "site" that includes just the scheme and registered domain. Thus, https://google.co.uk would be a site, and subdomains like https://maps.google.co.uk would stay in the same process.
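A toy version of that site computation might look like the following (our sketch, not Chrome code: real Chrome consults the Public Suffix List, so multi-label registries like .co.uk are handled correctly there, while this two-label shortcut is not):

import java.net.URI;

public class SiteKey {
    // Simplified "site" = scheme + registered domain. The last two host
    // labels stand in for the registered domain in this sketch.
    static String siteOf(String url) {
        URI u = URI.create(url);
        String[] labels = u.getHost().split("\\.");
        String registered = labels.length < 2 ? u.getHost()
                : labels[labels.length - 2] + "." + labels[labels.length - 1];
        return u.getScheme() + "://" + registered;
    }

    public static void main(String[] args) {
        // Same site: these may share a renderer process.
        System.out.println(siteOf("https://maps.google.com/"));  // https://google.com
        System.out.println(siteOf("https://google.com/"));       // https://google.com
        // Different site: always isolated in a separate process.
        System.out.println(siteOf("https://example.org/"));      // https://example.org
    }
}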

Chrome has always had a multi-process architecture where different tabs could use different renderer processes. A given tab could even switch processes when navigating to a new site in some cases. However, it was still possible for an attacker's page to share a process with a victim's page. For example, cross-site iframes and cross-site pop-ups typically stayed in the same process as the page that created them. This would allow a successful Spectre attack to read data (e.g., cookies, passwords, etc.) belonging to other frames or pop-ups in its process.

When Site Isolation is enabled, each renderer process contains documents from at most one site. This means all navigations to cross-site documents cause a tab to switch processes. It also means all cross-site iframes are put into a different process than their parent frame, using "out-of-process iframes." Splitting a single page across multiple processes is a major change to how Chrome works, and the Chrome Security team has been pursuing this for several years, independently of Spectre. The first uses of out-of-process iframes shipped last year to improve the Chrome extension security model.
A single page may now be split across multiple renderer processes using out-of-process iframes.
Even when each renderer process is limited to documents from a single site, there is still a risk that an attacker's page could access and leak information from cross-site URLs by requesting them as subresources, such as images or scripts. Web browsers generally allow pages to embed images and scripts from any site. However, a page could try to request an HTML or JSON URL with sensitive data as if it were an image or script. This would normally fail to render and not expose the data to the page, but that data would still end up inside the renderer process where a Spectre attack might access it. To mitigate this, Site Isolation includes a feature called Cross-Origin Read Blocking (CORB), which is now part of the Fetch spec. CORB tries to transparently block cross-site HTML, XML, and JSON responses from the renderer process, with almost no impact to compatibility. To get the most protection from Site Isolation and CORB, web developers should check that their resources are served with the right MIME type and with the nosniff response header.
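For developers wondering what "the right MIME type and the nosniff response header" looks like in practice, here is a minimal, self-contained sketch (ours, using the JDK's built-in HTTP server) of serving JSON in a way CORB can confidently protect:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class NosniffServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
        server.createContext("/data", exchange -> {
            byte[] body = "{\"user\":\"example\"}".getBytes(StandardCharsets.UTF_8);
            // Declaring the real MIME type and adding nosniff lets CORB
            // keep this response out of a cross-site renderer process.
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.getResponseHeaders().set("X-Content-Type-Options", "nosniff");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}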

Site Isolation is a significant change to Chrome's behavior under the hood, but it generally shouldn't cause visible changes for most users or web developers (beyond a few known issues). It simply offers more protection between websites behind the scenes. Site Isolation does cause Chrome to create more renderer processes, which comes with performance tradeoffs: on the plus side, each renderer process is smaller, shorter-lived, and has less contention internally, but there is about a 10-13% total memory overhead in real workloads due to the larger number of processes. Our team continues to work hard to optimize this behavior to keep Chrome both fast and secure.

How does Site Isolation help?

In Chrome 67, Site Isolation has been enabled for 99% of users on Windows, Mac, Linux, and Chrome OS. (Given the large scope of this change, we are keeping a 1% holdback for now to monitor and improve performance.) This means that even if a Spectre attack were to occur in a malicious web page, data from other websites would generally not be loaded into the same process, and so there would be much less data available to the attacker. This significantly reduces the threat posed by Spectre.

Because of this, we are planning to re-enable precise timers and features like SharedArrayBuffer (which can be used as a precise timer) for desktop.

What additional work is in progress?

We're now investigating how to extend Site Isolation coverage to Chrome for Android, where there are additional known issues. Experimental enterprise policies for enabling Site Isolation will be available in Chrome 68 for Android, and it can be enabled manually on Android using chrome://flags/#enable-site-per-process.

We're also working on additional security checks in the browser process, which will let Site Isolation mitigate not just Spectre attacks but also attacks from fully compromised renderer processes. These additional enforcements will let us reach the original motivating goals for Site Isolation, where Chrome can effectively treat the entire renderer process as untrusted. Stay tuned for an update about these enforcements! Finally, other major browser vendors are finding related ways to defend against Spectre by better isolating sites. We are collaborating with them and are happy to see the progress across the web ecosystem.

Help improve Site Isolation!

We offer cash rewards to researchers who submit security bugs through the Chrome Vulnerability Reward Program. For a limited time, security bugs affecting Site Isolation may be eligible for higher reward levels, up to twice the usual amount for information disclosure bugs. Find out more about Chrome New Feature Special Rewards.
Categories: Google Security Blog

Compiler-based security mitigations in Android P

Google Security Blog - Wed, 06/27/2018 - 5:27pm
Posted by Ivan Lozano, Information Security Engineer

[Cross-posted from the Android Developers Blog]

Android's switch to LLVM/Clang as the default platform compiler in Android 7.0 opened up more possibilities for improving our defense-in-depth security posture. In the past couple of releases, we've rolled out additional compiler-based mitigations to make bugs harder to exploit and prevent certain types of bugs from becoming vulnerabilities. In Android P, we're expanding our existing compiler mitigations, which instrument runtime operations to fail safely when undefined behavior occurs. This post describes the new build system support for Control Flow Integrity and Integer Overflow Sanitization.
Control Flow Integrity
A key step in modern exploit chains is for an attacker to gain control of a program's control flow by corrupting function pointers or return addresses. This opens the door to code-reuse attacks, where an attacker executes arbitrary portions of existing program code to achieve their goals, such as counterfeit-object-oriented and return-oriented programming. Control Flow Integrity (CFI) describes a set of mitigation technologies that confine a program's control flow to a call graph of valid targets determined at compile time.
While we first supported LLVM's CFI implementation in select components in Android O, we're greatly expanding that support in P. This implementation focuses on preventing control flow manipulation via indirect branches, such as function pointers and virtual functions—the 'forward-edges' of a call graph. Valid branch targets are defined as function entry points for functions with the expected function signature, which drastically reduces the set of allowable destinations an attacker can call. Indirect branches are instrumented to detect runtime violations of the statically determined set of allowable targets. If a violation is detected because a branch points to an unexpected target, then the process safely aborts.

Figure 1. Assembly-level comparison of a virtual function call with and without CFI enabled.
Figure 1 illustrates how a function that takes an object and calls a virtual function gets translated into assembly with and without CFI. For simplicity, this was compiled with -O0 to prevent compiler optimization. Without CFI enabled, the code loads the object's vtable pointer and calls the function at the expected offset. With CFI enabled, it first performs a fast-path check to determine whether the pointer falls within the expected range of addresses of compatible vtables. Failing that, execution falls through to a slow path that does a more extensive check for valid classes defined in other shared libraries. The slow path aborts execution if the vtable pointer points to an invalid target.
With control flow tightly restricted to a small set of legitimate targets, code-reuse attacks become harder to utilize and some memory corruption vulnerabilities become more difficult or even impossible to exploit.
In terms of performance impact, LLVM's CFI requires compiling with Link-Time Optimization (LTO). LTO preserves the LLVM bitcode representation of object files until link-time, which allows the compiler to better reason about what optimizations can be performed. Enabling LTO reduces the size of the final binary and improves performance, but increases compile time. In testing on Android, the combination of LTO and CFI results in negligible overhead to code size and performance; in a few cases both improved.
For more technical details about CFI and how other forward-control checks are handled, see the LLVM design documentation.
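As a point of reference, enabling LLVM's forward-edge CFI outside the Android build system follows the pattern below (flags per the Clang documentation; Android's build system wraps these for platform components):

# Forward-edge CFI requires link-time optimization; hidden visibility
# is recommended so that vtable checks stay within the module.
clang++ -flto -fvisibility=hidden -fsanitize=cfi example.cpp -o example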
For Android P, CFI is enabled by default widely within the media frameworks and other security-critical components, such as NFC and Bluetooth. CFI kernel support has also been introduced into the Android common kernel when building with LLVM, providing the option to further harden the trusted computing base. This can be tested today on the HiKey reference boards.
Integer Overflow Sanitization
The UndefinedBehaviorSanitizer's (UBSan) signed and unsigned integer overflow sanitization was first used to harden the media stack in Android Nougat. This sanitization instruments arithmetic instructions that may overflow and safely aborts process execution if a signed or unsigned integer overflow occurs. The end result is the mitigation of an entire class of memory corruption and information disclosure vulnerabilities whose root cause is an integer overflow, such as the original Stagefright vulnerability.
Because of their success, we've expanded usage of these sanitizers in the media framework with each release. Improvements have been made to LLVM's integer overflow sanitizers to reduce their performance impact, using fewer instructions on 32-bit ARM and removing unnecessary checks. In testing, these improvements reduced the sanitizers' performance overhead by over 75% in Android's 32-bit libstagefright library for some codecs. Improved Android build system support, such as better diagnostics, more sensible crash reporting, and globally sanitized integer overflow targets for testing, has also expedited the rollout of these sanitizers.
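For builds outside the Android tree, the corresponding Clang invocation looks roughly like this (flags from the UBSan documentation; Android's build system chooses trapping or diagnostic modes per target):

# Instrument signed and unsigned arithmetic; abort instead of
# continuing when an overflow is detected.
clang -fsanitize=signed-integer-overflow,unsigned-integer-overflow \
      -fno-sanitize-recover=all example.c -o example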
We've prioritized enabling integer overflow sanitization in libraries where complex untrusted input is processed or where there have been security bulletin-level integer overflow vulnerabilities reported. As a result, in Android P the following libraries now benefit from this mitigation:
  • libui
  • libnl
  • libmediaplayerservice
  • libexif
  • libdrmclearkeyplugin
  • libreverbwrapper
Future Plans
Moving forward, we're expanding our use of these mitigation technologies, and we strongly encourage vendors to do the same with their customizations. More information about how to enable and test these options will be available soon on the Android Open Source Project.
Acknowledgements: This post was developed in joint collaboration with Vishwath Mohan, Jeffrey Vander Stoep, Joel Galenson, and Sami Tolvanen
Categories: Google Security Blog

Better Biometrics in Android P

Google Security Blog - Thu, 06/21/2018 - 2:46pm
Posted by Vishwath Mohan, Security Engineer

[Cross-posted from the Android Developers Blog]

To keep users safe, most apps and devices have an authentication mechanism, or a way to prove that you're you. These mechanisms fall into three categories: knowledge factors, possession factors, and biometric factors. Knowledge factors ask for something you know (like a PIN or a password), possession factors ask for something you have (like a token generator or security key), and biometric factors ask for something you are (like your fingerprint, iris, or face).

Biometric authentication mechanisms are becoming increasingly popular, and it's easy to see why. They're faster than typing a password, easier than carrying around a separate security key, and they prevent one of the most common pitfalls of knowledge-factor based authentication—the risk of shoulder surfing.
As more devices incorporate biometric authentication to safeguard people's private information, we're improving biometrics-based authentication in Android P by:
  • Defining a better model to measure biometric security, and using that to functionally constrain weaker authentication methods.
  • Providing a common platform-provided entry point for developers to integrate biometric authentication into their apps.
A better security model for biometrics
Currently, biometric unlocks quantify their performance with two metrics borrowed from machine learning (ML): False Accept Rate (FAR) and False Reject Rate (FRR).
In the case of biometrics, FAR measures how often a biometric model accidentally classifies an incorrect input as belonging to the target user—that is, how often another user is falsely recognized as the legitimate device owner. Similarly, FRR measures how often a biometric model accidentally classifies the user's biometric as incorrect—that is, how often a legitimate device owner has to retry their authentication. The first is a security concern, while the second is problematic for usability.
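In code, the two metrics are just ratios over labeled authentication attempts; this small sketch (standard definitions, with made-up counts) makes the security/usability split explicit:

public class BiometricMetrics {
    // FAR: fraction of imposter attempts that were wrongly accepted
    // (a security problem).
    static double falseAcceptRate(long falseAccepts, long imposterAttempts) {
        return (double) falseAccepts / imposterAttempts;
    }

    // FRR: fraction of genuine attempts that were wrongly rejected
    // (a usability problem).
    static double falseRejectRate(long falseRejects, long genuineAttempts) {
        return (double) falseRejects / genuineAttempts;
    }

    public static void main(String[] args) {
        System.out.println(falseAcceptRate(2, 10_000));   // 2.0E-4
        System.out.println(falseRejectRate(300, 10_000)); // 0.03
    }
}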
Both metrics do a great job of measuring the accuracy and precision of a given ML (or biometric) model when applied to random input samples. However, because neither metric accounts for an active attacker as part of the threat model, they do not provide very useful information about a model's resilience against attacks.
In Android 8.1, we introduced two new metrics that more explicitly account for an attacker in the threat model: Spoof Accept Rate (SAR) and Imposter Accept Rate (IAR). As their names suggest, these metrics measure how easily an attacker can bypass a biometric authentication scheme. Spoofing refers to the use of a known-good recording (e.g. replaying a voice recording or using a face or fingerprint picture), while imposter acceptance means successfully mimicking another user's biometric (e.g. trying to sound or look like a target user).
Strong vs. Weak Biometrics
We use the SAR/IAR metrics to categorize biometric authentication mechanisms as either strong or weak. Biometric authentication mechanisms with an SAR/IAR of 7% or lower are strong, and anything above 7% is weak. Why 7% specifically? Most fingerprint implementations have a SAR/IAR metric of about 7%, making this an appropriate standard to start with for other modalities as well. As biometric sensors and classification methods improve, this threshold can potentially be decreased in the future.
This binary classification is a slight oversimplification of the range of security that different implementations provide. However, it gives us a scalable mechanism (via the tiered authentication model) to appropriately scope the capabilities and the constraints of different biometric implementations across the ecosystem, based on the overall risk they pose.
While both strong and weak biometrics will be allowed to unlock a device, weak biometrics:
  • require the user to re-enter their primary PIN, pattern, password or a strong biometric to unlock a device after a 4-hour window of inactivity, such as when left at a desk or charger. This is in addition to the 72-hour timeout that is enforced for both strong and weak biometrics.
  • are not supported by the forthcoming BiometricPrompt API, a common API for app developers to securely authenticate users on a device in a modality-agnostic way.
  • can't authenticate payments or participate in other transactions that involve a KeyStore auth-bound key.
  • must show users a warning that articulates the risks of using the biometric before it can be enabled.
These measures are intended to allow weaker biometrics, while reducing the risk of unauthorized access.
BiometricPrompt API
Starting in Android P, developers can use the BiometricPrompt API to integrate biometric authentication into their apps in a device- and biometric-agnostic way. BiometricPrompt only exposes strong modalities, so developers can be assured of a consistent level of security across all devices their application runs on. A support library is also provided for devices running Android O and earlier, allowing applications to utilize the advantages of this API across more devices.
Here's a high-level architecture of BiometricPrompt.

The API is intended to be easy to use, allowing the platform to select an appropriate biometric to authenticate with instead of forcing app developers to implement this logic themselves. Here's an example of how a developer might use it in their app:
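(The original post embedded a code screenshot here; the sketch below is a reconstruction of typical usage of the android.hardware.biometrics.BiometricPrompt API in P, with illustrative strings and a no-op cancel handler.)

import android.content.Context;
import android.hardware.biometrics.BiometricPrompt;
import android.os.CancellationSignal;
import java.util.concurrent.Executor;

public class BiometricLogin {
    void authenticateUser(Context context) {
        Executor executor = context.getMainExecutor();
        BiometricPrompt prompt = new BiometricPrompt.Builder(context)
                .setTitle("Sign in")
                .setDescription("Confirm your identity to continue")
                .setNegativeButton("Cancel", executor,
                        (dialog, which) -> { /* user cancelled */ })
                .build();
        prompt.authenticate(new CancellationSignal(), executor,
                new BiometricPrompt.AuthenticationCallback() {
                    @Override
                    public void onAuthenticationSucceeded(
                            BiometricPrompt.AuthenticationResult result) {
                        // Proceed with the authenticated action.
                    }

                    @Override
                    public void onAuthenticationError(int errorCode,
                            CharSequence errString) {
                        // Fall back to PIN, pattern, or password.
                    }
                });
    }
}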

Conclusion
Biometrics have the potential to both simplify and strengthen how we authenticate our digital identity, but only if they are designed securely, measured accurately, and implemented in a privacy-preserving manner.
We want Android to get it right across all three. So we're combining secure design principles, a more attacker-aware measurement methodology, and a common, easy-to-use biometrics API that allows developers to integrate authentication in a simple, consistent, and safe manner.
Acknowledgements: This post was developed in joint collaboration with Jim Miller
Categories: Google Security Blog

End-to-end encryption for push messaging, simplified

Google Security Blog - Fri, 06/08/2018 - 5:39pm
Posted by Giles Hogben, Privacy Engineer and Milinda Perera, Software Engineer

[Cross-posted from the Android Developers Blog]

Developers already use HTTPS to communicate with Firebase Cloud Messaging (FCM). The channel between the FCM server endpoint and the device is encrypted with SSL over TCP. However, messages are not encrypted end-to-end (E2E) between the developer's server and the user's device unless developers take special measures.
To this end, we advise developers to use keys generated on the user device to encrypt push messages end-to-end. But implementing such E2E encryption has historically required significant technical knowledge and effort. That is why we are excited to announce the Capillary open source library, which greatly simplifies the implementation of E2E encryption for push messages between developer servers and users' Android devices.
We also added functionality for sending messages that can only be decrypted on devices that have recently been unlocked. This is designed to support decrypting messages on devices using File-Based Encryption (FBE): encrypted messages are cached in Device Encrypted (DE) storage, and message decryption keys are stored in Android Keystore, requiring user authentication. This allows developers to send messages with sensitive content that remain encrypted in cached form until the user has unlocked and decrypted their device.
The library handles:
  • Crypto functionality and key management across all versions of Android back to KitKat (API level 19).
  • Key generation and registration workflows.
  • Message encryption (on the server) and decryption (on the client).
  • Integrity protection to prevent message modification.
  • Caching of messages received in unauthenticated contexts to be decrypted and displayed upon device unlock.
  • Edge-cases, such as users adding/resetting device lock after installing the app, users resetting app storage, etc.
The library supports both RSA encryption with ECDSA authentication and Web Push encryption, allowing developers to re-use existing server-side code developed for sending E2E-encrypted Web Push messages to browser-based clients.
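To show the trust model (not Capillary's actual API, and not its full hybrid scheme), here is a stripped-down JCA sketch: the decryption key never leaves the device, so FCM and any intermediary only ever see ciphertext.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class E2ePushSketch {
    public static void main(String[] args) throws Exception {
        // Device: generate a keypair. In Capillary the private key lives
        // in Android Keystore; only the public half goes to the server.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair deviceKeys = gen.generateKeyPair();

        // Developer server: encrypt the payload with the device's public
        // key before handing it to FCM for delivery.
        Cipher enc = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        enc.init(Cipher.ENCRYPT_MODE, deviceKeys.getPublic());
        byte[] ciphertext = enc.doFinal(
                "sensitive message".getBytes(StandardCharsets.UTF_8));

        // Device: decrypt on arrival; FCM only ever saw ciphertext.
        Cipher dec = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        dec.init(Cipher.DECRYPT_MODE, deviceKeys.getPrivate());
        System.out.println(new String(
                dec.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}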
Along with the library, we are also publishing a demo app (at last, the Google privacy team has its own messaging app!) that uses the library to send E2E-encrypted FCM payloads from a gRPC-based server implementation.
What it's not
  • The open source library and demo app are not designed to support peer-to-peer messaging and key exchange. They are designed for developers to send E2E-encrypted push messages from a server to one or more devices. You can protect messages between the developer's server and the destination device, but not directly between devices.
  • It is not a comprehensive server-side solution. While core crypto functionality is provided, developers will need to adapt parts of the sample server-side code that are specific to their architecture (for example, message composition, database storage for public keys, etc.)
You can find more technical details describing how we've architected and implemented the library and demo here.
Categories: Google Security Blog

Insider attack resistance

Google Security Blog - Fri, 06/01/2018 - 2:31pm
Posted by Shawn Willden, Staff Software Engineer

[Cross-posted from the Android Developers Blog]

Our smart devices, such as mobile phones and tablets, contain a wealth of personal information that needs to be kept safe. Google is constantly trying to find new and better ways to protect that valuable information on Android devices. From partnering with external researchers to find and fix vulnerabilities, to adding new features to the Android platform, we work to make each release and new device safer than the last. This post talks about Google's strategy for making the encryption on Google Pixel 2 devices resistant to various levels of attack—from platform, to hardware, all the way to the people who create the signing keys for Pixel devices.

We encrypt all user data on Google Pixel devices and protect the encryption keys in secure hardware. The secure hardware runs highly secure firmware that is responsible for checking the user's password. If the password is entered incorrectly, the firmware refuses to decrypt the device. This firmware also limits the rate at which passwords can be checked, making it harder for attackers to use a brute force attack.

To prevent attackers from replacing our firmware with a malicious version, we apply digital signatures. There are two ways for an attacker to defeat the signature checks and install a malicious replacement for firmware: find and exploit vulnerabilities in the signature-checking process or gain access to the signing key and get their malicious version signed so the device will accept it as a legitimate update. The signature-checking software is tiny, isolated, and vetted with extreme thoroughness. Defeating it is hard. The signing keys, however, must exist somewhere, and there must be people who have access to them.

In the past, device makers have focused on safeguarding these keys by storing the keys in secure locations and severely restricting the number of people who have access to them. That's good, but it leaves those people open to attack by coercion or social engineering. That's risky for the employees personally, and we believe it creates too much risk for user data.

To mitigate these risks, Google Pixel 2 devices implement insider attack resistance in the tamper-resistant hardware security module that guards the encryption keys for user data. This helps prevent an attacker who manages to produce properly signed malicious firmware from installing it on the security module in a lost or stolen device without the user's cooperation. Specifically, it is not possible to upgrade the firmware that checks the user's password unless you present the correct user password. There is a way to "force" an upgrade, for example when a returned device is refurbished for resale, but forcing it wipes the secrets used to decrypt the user's data, effectively destroying it.
The Android security team believes that insider attack resistance is an important element of a complete strategy for protecting user data. The Google Pixel 2 demonstrated that it's possible to protect users even against the most highly-privileged insiders. We recommend that all mobile device makers do the same. For help, device makers working to implement insider attack resistance can reach out to the Android security team through their Google contact.

Acknowledgements: This post was developed in joint collaboration with Paul Crowley, Senior Software Engineer
Categories: Google Security Blog

Keeping 2 billion Android devices safe with machine learning

Google Security Blog - Thu, 05/24/2018 - 3:23pm

Posted by Sai Deep Tetali, Software Engineer, Google Play Protect
[Cross-posted from the Android Developers Blog]

At Google I/O 2017, we introduced Google Play Protect, our comprehensive set of security services for Android. While the name is new, the smarts powering Play Protect have protected Android users for years.
Google Play Protect's suite of mobile threat protections is built into more than 2 billion Android devices, automatically taking action in the background. We're constantly updating these protections so you don't have to think about security: it just happens. Our protections have been made even smarter by adding machine learning elements to Google Play Protect.
Security at scale
Google Play Protect provides in-the-moment protection from potentially harmful apps (PHAs), but Google's protections start earlier.
Before they're published in Google Play, all apps are rigorously analyzed by our security systems and Android security experts. Thanks to this process, Android devices that only download apps from Google Play are 9 times less likely to get a PHA than devices that download apps from other sources.
After you install an app, Google Play Protect continues its quest to keep your device safe by regularly scanning your device to make sure all apps are behaving properly. If it finds an app that is misbehaving, Google Play Protect either notifies you, or simply removes the harmful app to keep your device safe.
Our systems scan over 50 billion apps every day. To keep on the cutting edge of security, we look for new risks in a variety of ways, such as identifying specific code paths that signify bad behavior, investigating behavior patterns to correlate bad apps, and reviewing possible PHAs with our security experts.
In 2016, we added machine learning as a new detection mechanism and it soon became a critical part of our systems and tools.
Training our machines
In the most basic terms, machine learning means training a computer algorithm to recognize a behavior. To train the algorithm, we give it hundreds of thousands of examples of that behavior.
In the case of Google Play Protect, we are developing algorithms that learn which apps are "potentially harmful" and which are "safe." To learn about PHAs, the machine learning algorithms analyze our entire catalog of applications. Then our algorithms look at hundreds of signals combined with anonymized data to compare app behavior across the Android ecosystem to find PHAs. They look for behavior common to PHAs, such as apps that attempt to interact with other apps on the device, access or share your personal data, download something without your knowledge, connect to phishing websites, or bypass built-in security features.
When we find apps that exhibit similar malicious behavior, we group them into families. Visualizing these PHA families helps us uncover apps that share similarities with known bad apps but have so far remained under our radar.

After we identify a new PHA, we confirm our findings with expert security reviews. If the app in question is a PHA, Google Play Protect takes action on the app and then we feed information about that PHA back into our algorithms to help find more PHAs.
Doubling down on security
So far, our machine learning systems have successfully detected 60.3% of the malware identified by Google Play Protect in 2017.
In 2018, we're devoting a massive amount of computing power and talent to create, maintain and improve these machine learning algorithms. We're constantly leveraging artificial intelligence and our highly skilled researchers and engineers from all across Google to find new ways to keep Android devices safe and secure. In addition to our talented team, we work with the foremost security experts and researchers from around the world. These researchers contribute even more data and insights to keep Google Play Protect on the cutting edge of mobile security.
To check out Google Play Protect, open the Google Play app and tap Play Protect in the left panel.
Acknowledgements: This work was developed in joint collaboration with Google Play Protect, Safe Browsing and Play Abuse teams with contributions from Andrew Ahn, Hrishikesh Aradhye, Daniel Bali, Hongji Bao, Yajie Hu, Arthur Kaiser, Elena Kovakina, Salvador Mandujano, Melinda Miller, Rahul Mishra, Damien Octeau, Sebastian Porst, Chuangang Ren, Monirul Sharif, Sri Somanchi, Sai Deep Tetali, Zhikun Wang, and Mo Yu.
Categories: Google Security Blog
