Electronic Frontier Foundation

Don’t Let Encrypted Messaging Become a Hollow Promise

EFF - Fri, 07/19/2019 - 2:13pm

Why do we care about encryption? Why was it a big deal, at least in theory, when Mark Zuckerberg announced earlier this year that Facebook would move to end-to-end encryption on all three of its messaging platforms? We don’t just support encryption for its own sake. We fight for it because encryption is one of the most powerful tools individuals have for maintaining their digital privacy and security in an increasingly insecure world.

And although encryption may be the backbone, protecting digital security and privacy encompasses much more: additional technical features and policy choices must support the privacy and security goals that encryption enables.

But as we careen from one attack on encryption to the next, by governments from Australia to India to Singapore to Kazakhstan, we risk losing sight of this bigger picture. Even if encryption advocates could “win” this seemingly forever crypto war, it would be a hollow victory if it came at the expense of broader security. Some efforts—a recent proposal from Germany comes to mind—are as ham-fisted as ever, attempting to give government the power to demand the plaintext of any encrypted message. But others, like the GCHQ’s “Ghost” proposal, purport to give governments the ability to listen in on end-to-end encrypted communications without “weakening encryption or defeating the end-to-end nature of the service.” And, relevant to Facebook’s announcement, we’ve seen suggestions that providers could still find ways of filtering or blocking certain content, even when it is encrypted with a key the provider doesn’t hold.

So, as governments and others try to find ways to surveil and moderate private messages, we have to ask: What policy choices are incompatible with secure messaging? We know the answer has to be more than “don’t break encryption,” because, well, GCHQ already has a comeback to that one. Even when a policy choice technically maintains the mathematical components of end-to-end encryption, it can still violate the expectations users associate with secure communication.

So our answer, in short, is: a secure messenger should guarantee that no one but you and your intended recipients can read your messages or otherwise analyze their contents to infer what you are talking about. Any time a messaging app has to add “unless...” to that guarantee, whether in response to legislation or internal policy decisions, it’s a sign that messenger is delivering compromised security to its users.

EFF considers the following to be signs that a messenger is not delivering end-to-end encryption: client-side scanning, law enforcement “ghosts,” and unencrypted backups. In each of these cases, your messages remain between you and your intended recipient, unless...

Client-side scanning

Your messages stay between you and your recipient...unless you send something that matches up to a database of problematic content.

End-to-end encryption is meant to protect your messages from any outside party, including network eavesdroppers, law enforcement, and the messaging company itself. But the company could determine the contents of certain end-to-end encrypted messages if it implemented a technique called client-side scanning.

Sometimes called “endpoint filtering” or “local processing,” this privacy-invasive proposal works like this: every time you send a message, software that comes with your messaging app first checks it against a database of “hashes,” or unique digital fingerprints, usually of images or videos. If it finds a match, it may refuse to send your message, notify the recipient, or even forward it to a third party, possibly without your knowledge.
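To make the mechanics concrete, here is a minimal sketch of what such a client-side check might look like. Everything in it is hypothetical: the hash list and function names are stand-ins, and the exact SHA-256 match is a simplification (real systems like PhotoDNA use perceptual hashes that survive resizing and re-encoding).

```typescript
import { createHash } from "crypto";

// Hypothetical fingerprint list of "problematic content," shipped to the
// device alongside the messaging app. (Placeholder value, not a real hash.)
const blockedHashes = new Set<string>([
  "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
]);

// Runs on the sender's device, BEFORE the message is encrypted.
function passesClientSideScan(attachment: Buffer): boolean {
  const fingerprint = createHash("sha256").update(attachment).digest("hex");
  return !blockedHashes.has(fingerprint);
}

function sendMessage(attachment: Buffer): void {
  if (!passesClientSideScan(attachment)) {
    // A real implementation might refuse to send, notify the recipient,
    // or report the match to a third party without telling the sender.
    console.log("match: message blocked before it was ever encrypted");
    return;
  }
  console.log("no match: message encrypted end-to-end and sent");
}

sendMessage(Buffer.from("hello world"));
```

The point of the sketch is the ordering: the scan runs on plaintext, on your device, before encryption ever happens, which is how the math of end-to-end encryption stays formally intact while the privacy guarantee does not.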

Hash-matching is already a common practice among email services, hosting providers, social networks, and other large services that allow users to upload and share their own content. One widely used tool is PhotoDNA, created by Microsoft to detect child exploitation images. It allows providers to automatically detect and prevent this content from being uploaded to their networks and to report it to law enforcement. But because services like PhotoDNA run on company servers, they cannot be used with an end-to-end encrypted messaging service, leading to the proposal that providers of these services should do this scanning “client-side,” on the device itself.

The prevention of child exploitation imagery might seem to be a uniquely strong case for client-side scanning on end-to-end encrypted services. But it’s safe to predict that once messaging platforms introduce this capability, it will likely be used to filter a wide range of other content. Indeed, we’ve already seen a proposal that WhatsApp create “an updatable list of rumors and fact-checks” that would be downloaded to each phone and compared to messages to “warn users before they share known misinformation.” We can expect to see similar attempts to screen end-to-end messaging for “extremist” content and copyright infringement. There are good reasons to be wary of this sort of filtering of speech when it is done on public social media sites, but using it in the context of encrypted messaging is a much more extreme step, fully undermining users’ ability to carry out a private conversation.

Because all of the scanning and comparison takes place on your device, rather than in the cloud, advocates of this technique argue that it does not break end-to-end encryption: your message still travels between its two “ends”—you and your recipient—fully encrypted. But it’s simply not end-to-end encryption if a company’s software is sitting on one of the “ends” silently looking over your shoulder and pre-filtering all the messages you send.

Messengers can make the choice to implement client-side scanning. However, if they do, they violate the user expectations associated with end-to-end encryption, and cannot claim to be offering it.

Law enforcement “ghosts”

Your messages stay between you and your recipient...unless law enforcement compels a company to add a silent onlooker to your conversation.

Another proposed tweak to encrypted messaging is the GCHQ’s “Ghost” proposal, which its authors describe like this:

It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved—they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorize today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.

But as EFF has written before, this requires the provider to lie to its customers, actively suppressing any notification or UX feature that would allow users to verify who is participating in a conversation. Encryption without this kind of notification simply does not meet the bar for security.

Unencrypted backups by default

Your messages stay between you and your recipient...unless you back up your messages.

Messaging apps will often give users the option to back up their messages, so that conversations can be recovered if a phone is lost or destroyed. Mobile operating systems iOS and Android offer similar options to back up one’s entire phone. If conversation history from a “secure” messenger is backed up to the cloud unencrypted (or encrypted in a way that allows the company running the backup to access message contents), then the messenger might as well not have been end-to-end encrypted to begin with.

Instead, a messenger can choose to encrypt the backups under a key kept on the user’s device or a password that only the users know, or it can choose to not encrypt the backups. If a messenger chooses not to encrypt backups, then they should be off by default and users should have an opportunity to understand the implications of turning them on.
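As an illustration of the first option, here is a minimal sketch, not drawn from any real messenger's code, of encrypting a backup under a key derived from a passphrase that never leaves the user's device. The cloud provider stores only ciphertext it cannot read.

```typescript
import { scryptSync, randomBytes, createCipheriv, createDecipheriv } from "crypto";

// Derive a 256-bit key from a passphrase only the user knows.
function encryptBackup(messages: string, passphrase: string) {
  const salt = randomBytes(16);
  const key = scryptSync(passphrase, salt, 32);
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(messages, "utf8"), cipher.final()]);
  // salt, iv, tag, and ciphertext are all safe to hand to a cloud provider;
  // none of them reveal the messages without the passphrase.
  return { salt, iv, tag: cipher.getAuthTag(), ciphertext };
}

function decryptBackup(b: ReturnType<typeof encryptBackup>, passphrase: string): string {
  const key = scryptSync(passphrase, b.salt, 32);
  const decipher = createDecipheriv("aes-256-gcm", key, b.iv);
  decipher.setAuthTag(b.tag);
  return Buffer.concat([decipher.update(b.ciphertext), decipher.final()]).toString("utf8");
}

const backup = encryptBackup("my message history", "correct horse battery staple");
console.log(decryptBackup(backup, "correct horse battery staple")); // "my message history"
```

The sketch also makes the tradeoff visible: if the user forgets the passphrase, the backup is unrecoverable, which is exactly why providers are tempted to leave backups unencrypted or hold the keys themselves.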

For example, WhatsApp provides a mechanism to back messages up to the cloud. In order to back messages up in a way that makes them restorable without a passphrase in the future, these backups need to be stored unencrypted at rest. Upon first install, WhatsApp prompts you to choose how often you wish to back up your messages: daily, weekly, monthly, or never. In EFF’s Surveillance Self-Defense, we advise users to never back up their WhatsApp messages to the cloud, since that would deliver unencrypted copies of your message log to the cloud provider. In order for your communications to be truly secure, any contact you chat with must do the same.

Continuing the fight

In the 1990s, we had to fight hard in the courts, and in software, to defend the right to use encryption strong enough to protect online communications; in the 2000s, we watched mass government and corporate surveillance undermine everything online that was not defended by that encryption, deployed end-to-end. But there will always be attempts to find a weakness in those protections. And right now, that weakness lies in our acceptance of surveillance in our devices. We see that in attempts to implement client-side scanning, mandate deceptive user interfaces, or leak plaintext from our devices and apps. Keeping everyone’s communications safe means making sure we don’t hand over control of our devices to companies, governments, or other third parties.

Victory: Oakland City Council Votes to Ban Government Use of Face Surveillance

EFF - Thu, 07/18/2019 - 6:41pm

Earlier this week, Oakland’s City Council voted unanimously to ban local government use of face surveillance. The amendment to Oakland’s Surveillance and Community Safety Ordinance will make Oakland the third U.S. city to take this critical step toward protecting the safety, privacy, and civil liberties of its residents. 

Local governments like those in San Francisco, CA; Somerville, MA; and now Oakland, CA are leading the way in proactively heading off the threat of this particularly pernicious form of surveillance. Meanwhile, after a series of hearings by the House Oversight Committee, national and international policymakers have also begun to look closely at the technology’s threat to human rights and civil liberties.

On the same day that Oakland’s City Council voted to ban government use of the technology, the House of Representatives passed a bipartisan amendment to the Intelligence Authorization Act (H.R. 3494) that would require the Director of National Intelligence to report on the use of face surveillance by intelligence agencies. David Kaye, the United Nations Special Rapporteur on freedom of opinion and expression, has also called for a moratorium on face surveillance saying, "Surveillance tools can interfere with human rights, from the right to privacy and freedom of expression to rights of association and assembly."

Over the last several years, EFF has continuously voiced concerns over the First and Fourth Amendment implications of government use of face surveillance. These concerns are exacerbated by research conducted by MIT’s Media Lab regarding the technology’s high error rates for women and people of color. However, even if manufacturers are successful in addressing the technology’s substantially higher error rates for already marginalized communities, government use of face recognition technology will still threaten safety and privacy, chill free speech, and amplify historical and ongoing discrimination in our criminal system.

Even as Oakland’s face surveillance ban awaits a procedural second reading, lawmakers and community members across the country are considering their own prohibitions and moratoriums on their local governments’ use. This week, the Public Safety Committee in the neighboring city of Berkeley, CA held a hearing on its own proposed ban, and lawmakers across the country took to Twitter to share news of similar intentions of their own.

Massachusetts residents, beyond Somerville, hoping to protect their communities from face surveillance should contact their state lawmakers in support of S.1385 and H.1538, the proposed bills calling for a moratorium throughout the Commonwealth. Outside of Massachusetts, as governing bodies across the country adjourn for their summer recess, now is an opportune time to call on your own representatives to take a stand for the rights of their constituents, by banning government use of face surveillance in your community. 

SAMBA versus SMB: Adversarial Interoperability is Judo for Network Effects

EFF - Thu, 07/18/2019 - 6:06pm

Before there was Big Tech, there was "adversarial interoperability": when someone decides to compete with a dominant company by creating a product or service that "interoperates" (works with) its offerings.

In tech, "network effects" can be a powerful force to maintain market dominance: if everyone is using Facebook, then your Facebook replacement doesn't just have to be better than Facebook, it has to be so much better than Facebook that it's worth using, even though all the people you want to talk to are still on Facebook. That's a tall order.

Adversarial interoperability is judo for network effects, using incumbents' dominance against them. To see how that works, let's look at a historical example of adversarial interoperability's role in helping to unseat a monopolist's dominance.

The first skirmishes of the PC wars were fought with incompatible file formats and even data-storage formats: Apple users couldn't open files made by Microsoft users, and vice-versa. Even when file formats were (more or less) harmonized, there was still the problem of storage media: the SCSI drive you plugged into your Mac needed a special add-on and flaky driver software to work on your Windows machine; the ZIP cartridge you formatted for your PC wouldn't play nice with Macs.

But as office networking spread, the battle moved to a new front: networking compatibility. AppleTalk, Apple's proprietary protocol for connecting Macs and networked devices like printers, pretty much Just Worked, provided you were using a Mac. If you were using a Windows PC, you had to install special, buggy, unreliable software.

And for Apple users hoping to fit in at Windows shops, the problems were even worse: Windows machines used the SMB protocol for file-sharing and printers, and Microsoft's support for MacOS was patchy at best, nonexistent at worst, and costly besides. Businesses sorted themselves into Mac-only and PC-only silos, and if a Mac shop needed a PC (for the accounting software, say), it was often cheaper and easier just to get the accountant their own printer and backup tape-drive, rather than try to get that PC to talk to the network. Likewise, at PC shops with a single graphic designer on a Mac, that designer would often live offline, disconnected from the office network, tethered to their own printer, with their own stack of Mac-formatted ZIP cartridges or CD-ROMs.

All that started to change in 1993: that was the year that an Australian PhD candidate named Andrew Tridgell licensed his SAMBA package as free/open source software and exposed it to the wide community of developers looking to connect their non-Microsoft computers—Unix and GNU/Linux servers, MacOS workstations—to the dominant Microsoft LANs.

SAMBA was created by using a "packet sniffer" to ingest raw SMB packets as they traversed a local network; these intercepted packets gave Tridgell the insight he needed to reverse-engineer Microsoft's proprietary networking protocol. Tridgell prioritized compatibility with LAN Manager, a proprietary Network Operating System that enterprise networks made heavy use of. If SAMBA could be made to work in LAN Manager networks, then you could connect a Mac to a PC network—or vice-versa—and add some Unix servers and use a mix of SAMBA and SMB to get them all to play nice with one another.

The timing of Tridgell's invention was crucial: in 1993, Microsoft had just weathered the Federal Trade Commission’s antitrust investigation of its monopoly tactics, squeaking through thanks to a 2-2 deadlock among the commissioners, and was facing down a monopoly investigation by the Department of Justice.

The growth of local-area networks greatly accelerated Microsoft's dominance. It's one thing to dominate the desktop, another entirely to leverage that dominance so that no one else can make an operating system that connects to networks that include computers running that dominant system. Network administrators of the day were ready to throw in the towel and go all-Microsoft for everything from design workstations to servers.

SAMBA changed all that. What's more, as Microsoft updated SMB, SAMBA matched them, drawing on a growing cadre of software authors who relied on SAMBA to keep their own networks running.

The emergence of SAMBA in the period when Microsoft's dominance was at its peak, the same year that the US government tried and failed to address that dominance, was one of the most salutary bits of timing in computing history, carving out a new niche for Microsoft's operating system rivals that gave them space to breathe and grow. It's certainly possible that without SAMBA, Microsoft could have leveraged its operating system, LAN and application dominance to crush all rivals.

So What Happened?

We don't see a lot of SAMBA-style stories anymore, despite increased concentration of various sectors of the tech market and a world crying out for adversarial interoperability judo throws.

Indeed, investors seem to have lost their appetite for funding companies that might disrupt the spectacularly profitable Internet monopolists of 2019, ceding them those margins and deeming their territory to be a "kill zone."

VCs have not lost their appetite for making money, and toolsmiths have not lost the urge to puncture the supposedly airtight bubbles around the Big Tech incumbents, so why is it so hard to find a modern David with the stomach to face off against 2019's Goliaths?

To find the answer, look to the law. As monopolists have conquered more and more of the digital realm, they have invested some of those supernormal profits in law and policy that lets them fend off adversarial interoperators.

One legal weapon is "Terms of Service": both Facebook and Blizzard have secured judgments giving their fine print the force of law, and now tech giants use clickthrough agreements that amount to, "By clicking here, you promise that you won't try to adversarially interoperate with us."

A modern SAMBA project would have to contend with this liability, and Microsoft would argue that anyone who took the step of installing SMB had already agreed that they wouldn't try to reverse-engineer it to make a compatible product.

Then there's "anti-circumvention," a feature of 1998's Digital Millennium Copyright Act (DMCA). Under Section 1201 of the DMCA, bypassing a "copyright access control" can put you in both criminal and civil jeopardy, regardless of whether there's any copyright infringement. DMCA 1201 was originally used to stop companies from making region-free DVD players or modding game consoles to play unofficial games (neither of which is a copyright violation!).

But today, DMCA 1201 is used to control competitors, critics, and customers. Any device with software in it contains a "copyrighted work," so manufacturers need only set up an "access control" and they can exert legal control over all kinds of uses of the product.

Their customers can only use the product in ways that don't involve bypassing the "access control," and that can be used to force you to buy only one brand of ink or use apps from only one app store.

Their critics—security researchers auditing their cybersecurity—can't publish proof-of-concept to back up their claims about vulnerabilities in the systems.

And competitors can't bypass access controls to make compatible products: third party app stores, compatible inks, or a feature-for-feature duplicate of a dominant company's networking protocol.

Someone attempting to replicate the SAMBA creation feat in 2019 would likely come up against an access control that needed to be bypassed in order to peer inside the protocol's encrypted outer layer and create a feature-compatible tool for use in competing products.

Another thing that's changed (for the worse) since 1993 is the proliferation of software patents. Software patenting went into high gear around 1994 and consistently gained speed until 2014, when Alice v. CLS Bank put the brakes on (today, Alice is under threat). After decades of low-quality patents issuing from the US Patent and Trademark Office, there are so many trivial, obvious and overlapping software patents in play that anyone trying to make a SAMBA-like product would run a real risk of being threatened with expensive litigation for patent infringement.

This thicket of legal anti-adversarial-interoperability dangers has been a driver of market concentration, and the beneficiaries of market concentration have also spent lavishly to expand and strengthen the thicket. It's gotten so bad that even some "open standards organizations" have standardized easy-to-use ways of legally prohibiting adversarial interoperability, locking in the dominance of the largest browser vendors.

The idea that wildly profitable businesses would be viewed as unassailable threats by investors and entrepreneurs (rather than as irresistible targets) tells you everything you need to know about the state of competition today. As we look to cut the Big Tech giants down to size, let's not forget that tech once thronged with Davids eager to do battle with Goliaths, and that this throng would be ours to command again, if only we would re-arm it.

A Bad Copyright Bill Moves Forward With No Serious Understanding of Its Dangers

EFF - Thu, 07/18/2019 - 4:15pm

The Senate Judiciary Committee voted on the Copyright Alternative in Small-Claims Enforcement Act, aka the CASE Act, without holding any hearings at which experts could explain the huge flaws in the bill as it’s currently written. And flaws there are.

We’ve seen some version of the CASE Act pop up for years now, and the problems with the bill have never been addressed satisfactorily. This is still a bill that puts people in danger of huge, unappealable money judgments from a quasi-judicial system—not an actual court—for the kind of Internet behavior that most people engage in without thinking.

During the vote in the Senate Judiciary Committee, it was once again stressed that the CASE Act—which would turn the Copyright Office into a copyright traffic court—created a “voluntary” system.

“Voluntary” does not accurately describe the regime of the CASE Act. The CASE Act does allow people who receive notices from the Copyright Office to opt out of the system. But the average person is not really going to understand what is going on, other than that they’ve received what looks like a legal summons.

Take Action

Tell the Senate Not to Enable Copyright Trolls

Furthermore, the CASE Act gives people just 60 days from receiving the notice to opt out, so long as they do so in writing “in accordance with regulations established by the Register of Copyrights,” which in no way promises that opting out will be a simple process, understandable to everyone. But because the system is opt-out, and the goal of the system is presumably to move as many cases through it as possible, the Copyright Office has little incentive to make opting out fair to respondents and easy to do.

That leaves opting out as something most easily taken advantage of by companies and people who have lawyers who can advise them of the law and leaves the average Internet user at risk of having a huge judgment handed down by the Copyright Office. At first, those judgments can be up to $30,000, enough to bankrupt many people in the U.S., and that cap can grow even higher without any more action by Congress. And the “Copyright Claims Board” created by the CASE Act can issue those judgments to those who don’t show up. A system that can award default judgments like this is not “voluntary.”

We know how this will go because we’ve seen this kind of confusion and fear with the DMCA. People receive DMCA notices and, unaware of their rights or intimidated by the requirements of a counter-notice, let their content disappear even if it’s fair use. The CASE Act makes it extremely easy to collect against people using the Internet the way everyone does: sharing memes, photos, and video.

If the CASE Act was not opt-out, but instead required respondents to give affirmative consent, or “opt-in,” at least the Copyright Office would have greater incentive to design proceedings that safeguard the respondents’ interests and have clear standards that everyone can understand. With both sides choosing to litigate in the Copyright Office, it’s that much harder for copyright trolls to use the system to get huge awards in a place that is friendly to copyright holders.

We said this the last time the CASE Act was proposed and we’ll say it again: Creating a quasi-court focused exclusively on copyright with the power to pass judgment on parties in private disputes invites abuse. It encourages copyright trolling by inviting trolls to file as many copyright claims as they can against whoever is least likely to opt out—ordinary Internet users who can be coerced into paying thousands of dollars to escape the process, whether they infringed copyright or not.

Copyright law fundamentally impacts freedom of expression. People shouldn’t be funneled to a system that hands out huge damage awards with less care than a traffic ticket gets.

Take Action

Tell the Senate Not to Enable Copyright Trolls

Sharpening Our Claws: Teaching Privacy Badger to Fight More Third-Party Trackers

EFF - Wed, 07/17/2019 - 7:10pm

The latest release of Privacy Badger gives it the power to detect and block a new class of evasive, pervasive third-party trackers, including Google Analytics.

Most blocking tools, like uBlock Origin, Ghostery, and Firefox’s native blocking mode (using Disconnect’s block lists), use human-curated lists to decide whether to block or allow third-party resources. But Privacy Badger is different. Rather than rely on a list of known trackers, it discovers and learns to block new trackers in the wild. It works using heuristics, or patterns of behavior, to identify trackers.

Last week, we updated Privacy Badger with a new heuristic to help it identify trackers that have flown under its radar in the past. Here’s how it works.

What makes a tracker a tracker?

All tracker-blockers have to grapple with a fundamental question: what is a tracker, anyway? Oftentimes, it’s obvious. When an ad network sets a third-party cookie and uses it to build a profile of your browsing history, it’s tracking you. But other times, it’s not so straightforward. Is a content delivery network (CDN) that serves static files across the web a tracker? What about a third-party image host? How do you decide what to block, and what to allow?

List-based blockers have humans make those decisions on a case-by-case basis. The idea is to keep a list of every single domain or URL that might be tracking you on the web. EasyPrivacy, the popular tracker list used by AdBlock Plus and others, has almost 17,000 entries. Creating a list like that requires tens of thousands of judgment calls by human beings, and maintaining it means making those decisions over and over again, for as long as the list is in use.

Privacy Badger takes a different approach. We try to define what tracking behavior looks like, so that any third-party domain that acts that way is likely to be a tracker, and any one that doesn’t is likely to be benign. Then we let the extension decide on each request it sees for us. When new trackers appear on the web, or formerly benign companies start tracking users, Privacy Badger learns to block them without any help from us. If a company that doesn’t intend to track its users becomes blocked anyway, it can adopt a legally-binding Do Not Track (DNT) policy that commits the company to respecting users’ privacy. Privacy Badger will then automatically unblock that company’s resources.

This makes our choice of heuristics extremely important. Trying to write rules that will identify every single tracker on the Web, present and future, is a Sisyphean task. Tracking on the Web is always developing, and the most creative trackers will probably always be able to circumvent our detection. Our goal is to detect and block the vast majority of trackers that users encounter on a daily basis—and to make the surveillance business model a little less profitable.

Cookie Sharing: third-party tracking at a first-party price

For most of its history, Privacy Badger has used three main heuristics to identify tracking behavior:

  • Third-party cookies. Cookies are the simplest and most common tracking tools on the web. Privacy Badger considers a domain to be tracking if it sets a third-party cookie with enough information to uniquely identify an individual user. In our own experiments, we’ve found that around 98% of the tracking activity identified by Privacy Badger uses third-party cookies.

  • Local storage “supercookies.” Third-party domains that can run JavaScript are able to set values in the browser’s local storage, then retrieve them later to track user activity across sites. Because these values act a lot like cookies but can evade common cookie-blocking tactics, they are sometimes referred to as “supercookies.” Privacy Badger looks for reads and writes of lots of information to third-party local storage and marks those as tracking.

  • Canvas fingerprinting. Trackers can also use JavaScript to try to extract a “browser fingerprint,” a value that can uniquely identify your device without the use of stored values like cookies. Privacy Badger looks for some of the most common kinds of fingerprinting using the HTML canvas, and marks those actions as tracking.
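To illustrate the last of these heuristics, here is a minimal sketch of the kind of canvas probing a fingerprinting script performs. The drawn text and style choices are arbitrary stand-ins; real scripts hash the output and combine it with many other signals.

```typescript
// Sketch of canvas fingerprinting: render content off-screen, then read
// the pixels back. Small differences in GPU, fonts, and rendering stack
// make the serialized image differ from device to device.
function canvasFingerprint(): string {
  const canvas = document.createElement("canvas");
  canvas.width = 200;
  canvas.height = 50;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "";
  ctx.textBaseline = "top";
  ctx.font = "14px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(0, 0, 100, 30);
  ctx.fillStyle = "#069";
  ctx.fillText("example text 😃", 2, 15); // emoji exercises font fallback paths
  // Reading the pixels back (toDataURL or getImageData) immediately after
  // drawing is the tell-tale pattern a blocker can watch for.
  return canvas.toDataURL();
}
```

Note the write-then-read pattern: drawing to a canvas the user never sees, then immediately serializing it, rarely has a legitimate display purpose, which is what makes it detectable as tracking behavior.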

These heuristics help Privacy Badger identify and block the majority of tracking requests on the web. But a while ago, we noticed that one particularly notorious data collector was evading our filters: Google Analytics.

Because Google Analytics doesn’t use third-party cookies, local storage supercookies, or browser fingerprinting to collect data about users, it wasn’t caught by any of Privacy Badger’s existing heuristics. However, it is a silent passenger on a huge portion of the Web, and one that collects information about users and sends that data back to Google. Moreover, Google Analytics is included on nearly every popular human-curated block list, including EasyPrivacy and Disconnect. Any intuitive definition of “tracking” probably includes what Google Analytics does, but Privacy Badger’s definition didn’t.

What Google Analytics does make use of is cookie sharing. Cookie sharing is a tracking technique most often used by third-party analytics services. It can also help trackers sidestep restrictions on third-party cookies, like Safari’s Intelligent Tracking Protection (ITP) and Firefox’s default content blocking.

It works like this: When you visit a website, the page loads a piece of JavaScript from a third-party server. That JavaScript runs in a first-party context and sets a cookie associated with the first-party domain, like “example.com.” Your browser allows the third-party JavaScript (running as part of the first-party page) to read and update the cookie. Then, the JavaScript sends off a request to the third-party tracker. Normally, cookies are automatically sent alongside requests, and the browser controls who sees what cookies—it wouldn’t allow a first-party cookie to be sent to a third party like Google. However, since Google’s script is able to access the cookie, it can stick the cookie value right into the request itself (specifically, into the “query string” portion of the request). Google receives the identifier from the first-party cookie and uses it to link the request back to a user profile.
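Here is a sketch of that pattern as it might appear in a third-party script running inside the first-party page. The cookie name, tracker domain, and parameter names are illustrative, not any specific vendor's code.

```typescript
// Runs as third-party JavaScript in a first-party context, so it can
// read and write first-party cookies through document.cookie.
function getCookie(name: string): string | undefined {
  const match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
  return match ? decodeURIComponent(match[1]) : undefined;
}

// Get or create a per-site visitor ID, stored in a first-party cookie.
let visitorId = getCookie("_visitor_id");
if (!visitorId) {
  visitorId = Math.random().toString(36).slice(2);
  document.cookie = `_visitor_id=${visitorId}; max-age=63072000; path=/`;
}

// Smuggle the cookie value to the tracker in the query string of a 1x1
// pixel request. The browser would never send this first-party cookie to
// tracker.example on its own; the script copies it into the URL instead.
new Image().src =
  "https://tracker.example/collect?cid=" + encodeURIComponent(visitorId) +
  "&page=" + encodeURIComponent(location.href);
```

The pixel request itself carries no third-party cookie at all, which is how this technique slips past third-party cookie blocking.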

Cookie-sharing trackers, including Google Analytics, often rely on “tracking pixels” to work. A tracking pixel is typically an invisible, 1x1 “image” that is placed on a web page for the sole purpose of triggering a request to a third party. That means most cookie sharing is undetectable to all but the most tech-savvy users.

On its own, cookie sharing is usually not as effective at tracking users as traditional third-party cookies. Because the first-party sites don’t share cookie data with each other, a third-party tracker will associate the same user with a different cookie value on each first-party site the user visits. This makes it more difficult for the tracker to link data from different first-party sites to the same user. But if the tracker sees requests from the same user on two different first-party sites in rapid succession, it can use other identifying information, like IP address or TLS state, to link different cookie values to the same user. Google Analytics is present, by some measures, on as much as 80% of the Web, so it gets information about nearly every site most users visit, and linking those identities together is a cinch. But don’t take our word for it; Google’s privacy policy says as much (emphasis ours):

Google Analytics relies on first-party cookies, which means the cookies are set by the Google Analytics customer. Using our systems, data generated through Google Analytics can be linked by the Google Analytics customer and by Google to third-party cookies that are related to visits to other websites.

Let’s look at how this might work in practice. Imagine a user browses to a handful of sites over the course of a day, each of which has the same cookie-sharing tracker on it. The table below shows the information that the tracker gets from each visit.

Time      Source                  Shared cookie ID   IP address
10:25 am  theguardian.com         abc123             192.168.124.101
10:46 am  studentaid.ed.gov       def456             192.168.124.101
2:41 pm   newegg.com              cba321             192.168.131.92
2:55 pm   plannedparenthood.org   xyz789             192.168.131.92
3:02 pm   theguardian.com         abc123             192.168.131.92

On the first two sites the user visits, the tracker sets a first-party cookie in the user’s browser that is unique to the first-party site. The user’s ID cookie is “abc123” on The Guardian’s website, and “def456” on ed.gov. But because the user visits the two sites in rapid succession, their IP address remains the same, so the tracker can infer that the “abc123” user and the “def456” user are one and the same.

Later in the day, the same user gets back online and visits Newegg and Planned Parenthood’s websites. The user’s IP address has changed, so the tracker doesn’t know that the “cba321” and “xyz789” identities point to the same user as before. However, the user then re-visits The Guardian’s website, and the tracker sees another request from user “abc123” coming from the new IP. This tells the tracker that the requests from Planned Parenthood and Newegg probably came from the same user, and lets it link all the recent activity it’s seen back to a single identity.
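Under the assumption stated above, that the tracker joins records sharing a cookie ID or an IP address seen close together in time, a toy version of the linking logic looks like this. The data is the table above; a real tracker would also bound matches by time window and use many more signals.

```typescript
// Toy identity resolution over the visits in the table above.
type Hit = { time: string; site: string; cookieId: string; ip: string };

const hits: Hit[] = [
  { time: "10:25", site: "theguardian.com",       cookieId: "abc123", ip: "192.168.124.101" },
  { time: "10:46", site: "studentaid.ed.gov",     cookieId: "def456", ip: "192.168.124.101" },
  { time: "14:41", site: "newegg.com",            cookieId: "cba321", ip: "192.168.131.92"  },
  { time: "14:55", site: "plannedparenthood.org", cookieId: "xyz789", ip: "192.168.131.92"  },
  { time: "15:02", site: "theguardian.com",       cookieId: "abc123", ip: "192.168.131.92"  },
];

// Union-find: records sharing a cookie ID or an IP end up in one cluster.
const parent = new Map<string, string>();
function find(x: string): string {
  if (!parent.has(x)) parent.set(x, x);
  const p = parent.get(x)!;
  if (p === x) return x;
  const root = find(p);
  parent.set(x, root); // path compression
  return root;
}
const union = (a: string, b: string) => parent.set(find(a), find(b));

for (const h of hits) union("cookie:" + h.cookieId, "ip:" + h.ip);

// The return visit to theguardian.com (abc123) bridges the morning and
// afternoon IP addresses, collapsing all five visits into one identity.
const clusters = new Set(hits.map(h => find("cookie:" + h.cookieId)));
console.log(clusters.size); // 1
```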

Detecting Cookie Sharing

In the latest update, Privacy Badger has a new heuristic to detect cookie sharing. Every time it sees a third-party request, it runs a series of checks:

  1. Is the request an image request? The majority of cookie sharing uses 1x1 pixel “images,” so we ignore other kinds of requests.
  2. Does the request URL have query arguments? These are usually used to convey extra information with an image request.
  3. Do any of the query arguments contain a long segment of a first-party cookie? This is what we’re really looking for. If any of the query arguments have a large chunk of information (8 characters or more) in common with any of the first-party cookies on the page, the request is probably trying to share a tracking cookie.

If all of the above conditions match, Privacy Badger logs the request as a tracking action.
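A simplified version of those three checks, not Privacy Badger's actual source, which handles many more edge cases, might look like this:

```typescript
const MIN_SHARED_LENGTH = 8;

// Simplified cookie-sharing detector, run on each third-party request.
function looksLikeCookieSharing(
  requestType: string,         // e.g. "image"
  requestUrl: string,          // full URL of the third-party request
  firstPartyCookies: string[], // cookie values set on the current page
): boolean {
  // 1. Only image requests: pixel-based sharing uses 1x1 "images."
  if (requestType !== "image") return false;

  // 2. The URL must carry query arguments.
  const values = [...new URL(requestUrl).searchParams.values()];
  if (values.length === 0) return false;

  // 3. Does any query argument share a long chunk (8+ characters)
  //    with any first-party cookie on the page?
  return firstPartyCookies.some(cookie =>
    values.some(value => sharesLongSubstring(cookie, value, MIN_SHARED_LENGTH)),
  );
}

// Naive common-substring check; quadratic, but fine for a sketch.
function sharesLongSubstring(a: string, b: string, min: number): boolean {
  for (let i = 0; i + min <= a.length; i++) {
    if (b.includes(a.slice(i, i + min))) return true;
  }
  return false;
}

// Example: the "cid" argument repeats the first-party "_visitor_id" cookie.
console.log(looksLikeCookieSharing(
  "image",
  "https://tracker.example/collect?cid=k3j29dmx1q&page=https%3A%2F%2Fexample.com",
  ["k3j29dmx1q"],
)); // true
```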

After building the new heuristic, we tested it using Badger Sett, our in-house tool for scanning the web with Privacy Badger. We scanned the top 10,000 first-party websites on the Majestic Million and recorded the number of times each third-party domain was logged taking a particular tracking action. This allows us to see which new domains Privacy Badger will learn to block using the new heuristic, as well as to make sure it doesn’t mark too many benign requests as tracking.

The table below shows the five domains that Privacy Badger newly identified as tracking on the most sites:

Tracking domain        Business                     First-party sites seen tracking on
google-analytics.com   Third-party analytics        5479
chartbeat.net          Third-party analytics        659
nexac.com              Advertising                  220
bouncex.net            Identity resolution          151
alexametrics.com       Analytics, market research   140

Google Analytics is by far the most common tracker identified by the new heuristic, but all of the top five are what we would consider “trackers.” Four of the five are included on the Disconnect blocklist (the list used by Firefox’s content-blocking feature): Google Analytics, Chartbeat, Nexac, and Amazon’s Alexa Metrics. The fifth, BounceX, advertises itself as a service to “accurately recognize and market to the actual person behind every visit in real-time.” Sounds an awful lot like tracking to us.

Track Changes

The techniques used by trackers are always evolving, so Privacy Badger’s countermeasures have to evolve, too. In the process of developing the new cookie-sharing heuristic, we learned more about how to evaluate and iterate on our detection metrics. As a result, Privacy Badger is stronger than ever. When the next generation of corporate surveillance technology hits the web, we’ll be ready.

Install Privacy Badger

House Judiciary Committee Continues Its Antitrust Inquiry Into the Internet Marketplace

EFF - Wed, 07/17/2019 - 5:05pm

The House Judiciary Committee’s Subcommittee on Antitrust held its second hearing on whether our antitrust laws and their enforcement are keeping up with the Internet marketplace. Notably, Amazon, Google, Facebook, and Apple were present as witnesses along with a range of experts to follow, giving Congress a lot to assess. EFF supports this inquiry by Congress and shares the concern that the lifecycle of competition has been on the decline, bringing serious risk of a small handful of gatekeepers having disproportionate power over user expression and privacy online.

Congress Asks the Wrong Question: “How Can We Make Big Tech Behave Better?”

A persistent theme from some in Congress is applying pressure to Google, Facebook, Apple, and Amazon to modify their practices to appease some constituency. This misses the point of an antitrust inquiry, which is to figure out if these companies are exerting market power in ways that foreclose competition, harm consumers, and suppress innovation, and then remedy the harm with thoughtful interventions. But not all industries share this objective—some would rather have the dominant Internet companies become regulated monopolies in order to further their own policy goals.

For example, major entertainment companies submitted a letter to the House Judiciary Committee claiming that Google and Facebook are not doing enough to stop copyright infringement, and attempted to shoehorn the copyright issue into an antitrust discussion. Echoing this theme, Congresswoman Mary Gay Scanlon asked the tech companies to explain their copyright enforcement practices. Google’s witness responded by discussing Content ID, their wildly expensive and poorly functioning system of copyright filters for YouTube.

The question struck a discordant note in this hearing on monopoly power. YouTube has the lion’s share of user-uploaded video on the Internet, and while Content ID causes endless difficulties for video creators and their viewers, YouTube’s vast scale makes its filtering system hard to avoid. What’s more, Content ID has cost at least $40 million to build. An aspiring competitor would find it next to impossible to build an equivalent system of filters. Making systems like Content ID the norm (or worse still, a legal requirement) cements Google’s dominance, reinforcing the exact problem the Antitrust Subcommittee has set out to fix.

Changing the subject to copyright may further the entertainment industry’s agenda, but it doesn’t address the monopoly problem. Major content companies consistently seek to narrow distribution channels for their content and then maximize control, an agenda that has no place in an antitrust inquiry: the ultimate objective for Congress should be to open the door to more channels of distribution that can compete with YouTube. Content creators would benefit from that competition. Alternative distributors would give creators choices, and could compete for new video creations with better terms than what YouTube offers.

Congress’ Effort to Understand the Internet Marketplace Is Vital to the Internet’s Future

Chairman David Cicilline laid out the facts as to why his Subcommittee is engaging in this antitrust inquiry: a small handful of companies wield enormous influence over Internet activity, and investors who are important in funding startups have noted the “kill-zone” that exists when challenging the dominant tech companies. Rather than launch startups meant to displace the Internet giants, the market seems to have trended toward building companies that the tech giants will seek to acquire through vertical mergers. Professor Fiona Scott Morton testified that what may be needed is a rethinking of mergers and acquisitions, and EFF agrees that this area of antitrust law is sorely in need of a reboot. Other witnesses, including Stacey Mitchell of the Institute for Local Self-Reliance, noted that small businesses now see companies like Amazon as potentially hostile to their ability to use the Internet to grow. This represents a dramatic shift from how Internet industries have worked historically, where the garage startup became the next billion-dollar corporation only to be replaced by the next garage startup, and no company could gatekeep others.

It’s a powerful indication that Internet markets have changed when the giant corporations under scrutiny, responding to questions from Congressman Hank Johnson, could only list each other as competitors. In a follow-up inquiry, Congressman Joe Neguse noted that 4 of the 6 largest global social media companies are owned by Facebook, which highlights how its series of mergers got ahead of users who switched away from Facebook, only to be brought back into the fold.

Lastly, an unfortunate outcome from this congressional inquiry was dismissiveness by some Members of Congress about the idea of structural separation. This is effectively saying these serious problems facing the Internet ecosystem should be approached with one hand tied behind our backs. It is quite possible that industry is opposing discussions of structural separation policies, which have a long history in antitrust law, because they might be the most effective solution. If the endgame is to avoid a regulated monopoly approach to Internet commerce, then all options have to be on the table with the purpose of empowering users to be the ultimate decision makers for the Internet’s future. It is critical that Congress get this right.  An extraordinarily valuable benefit of the Internet is how it has lowered the barriers to participation in commerce, politics, and other social activities that historically were often reserved to the powerful few.

The Design Behind EFF's New Membership Shirt

EFF - Tue, 07/16/2019 - 6:51pm

At EFF, a lot of the imagery we produce has a dystopian flavor — and with good reason.

We are currently fighting battles against increased use of facial recognition tech, IMSI-catchers, and expansive copyright laws that may punish ordinary users with thousands of dollars in fines. And when we’re not fighting the NSA’s Big Brother machine, we’re pushing Facebook to quit abusing their users’ data. Some of our bumper stickers and t-shirts even featured the slogan “FIGHT DYSTOPIA.”

But in one of our recent meetings to discuss our summer membership drive, our Development team asked itself the following question: What if we didn’t just focus on the threats to our liberty, but on the potential for a better digital future? Put another way, what will it look like when we succeed? What if we got the Internet right?

Immediately after that meeting I did a quick whiteboard sketch which I later developed into some vector artwork:

Since the Internet is made up of cats, our hero is an anthropomorphized, non-gender-specific cat, who is happily engaged in various uses of technology. They are simultaneously enjoying a private conversation on their phone, engaging in protected free speech with their laptop, and expressing themselves creatively in their side gig as a DJ. Sprinkled throughout are visual references to open wireless, two-factor authentication, our Surveillance Self-Defense website, and various other EFF projects. Since the cat is an Internet freedom supporter, they get to wear a flowing superhero cape that subtly features EFF’s logo.

Just to drive the point home and for the avoidance of doubt, the slogan on the back of the shirt reads “A BETTER DIGITAL WORLD IS POSSIBLE.”

You can get your own copy of the shirt by joining or renewing your membership today. We’ve got lots of other swag that can come with your membership, including new lapel pins available through July 24. Each one represents a different step you can take today to safeguard your privacy and protect yourself online.

At EFF, we fight for the user. That means we stand with the individual, the ordinary person using technology in their everyday lives — not with the corporations and governments who so often try to use technology to limit our freedoms. We can build a better digital future, but we have to do it together. Please show your support for our vision by joining EFF today!

JOIN EFF FOR $20 THIS WEEK

Get EFF's new shirt, and build a better digital future

Hearing Thursday: EFF, ACLU Will Ask Court to Rule In Favor of Travelers Suing DHS Over Unconstitutional, Warrantless Searches of Cellphones, Laptops

EFF - Tue, 07/16/2019 - 4:30pm
Evidence Shows Fourth, First Amendment Violations

Boston, Massachusetts—On Thursday, July 18, at 3:00 p.m., lawyers for the Electronic Frontier Foundation (EFF) and the ACLU will ask a federal judge to decide that the constitutional rights of 11 travelers were violated by the suspicionless, warrantless searches of their electronic devices at the border by the U.S. government.

The plaintiffs are ten U.S. citizens and a lawful permanent resident who, like many Americans, regularly travel outside the country with their cellphones, laptops, and other electronic devices. Federal officers searched their devices at U.S. ports of entry without a warrant or any individualized suspicion that the devices contained contraband. Federal officers also confiscated the devices of four plaintiffs after they left the border, absent probable cause of criminal activity. The judge will decide whether a trial is needed or whether the evidence is so clear that the case can be decided now.

Evidence obtained in this lawsuit, Alasaad v. McAleenan, demonstrates the unconstitutionality of the challenged searches and confiscations of travelers’ devices at the border. It also demonstrates that the plaintiffs have standing to bring these claims. At Thursday’s oral argument, EFF Senior Staff Attorney Adam Schwartz will address the standing issues, and Esha Bhandari, staff attorney with the ACLU’s Speech, Privacy, and Technology Project, will address the merits of the claims.

WHAT:
Oral argument on summary judgment motion in Alasaad v. McAleenan

WHO:
Adam Schwartz, EFF senior staff attorney
Esha Bhandari, ACLU’s Speech, Privacy, and Technology Project

WHEN:
Thursday, July 18, 3:00 p.m.

WHERE:
John Joseph Moakley U.S. Courthouse
Courtroom 11, 5th Floor
1 Courthouse Way, Suite 2300
Boston, Massachusetts 02210

For more on this case: https://www.eff.org/cases/alasaad-v-duke

Contact: Adam Schwartz, Senior Staff Attorney, adam@eff.org

EFF Sues AT&T, Data Aggregators For Giving Bounty Hunters and Other Third Parties Access to Customers’ Real-Time Locations

EFF - Tue, 07/16/2019 - 12:55pm
Lawsuit Seeks Court Order Prohibiting Sale of Customer Location Data

SAN FRANCISCO — The Electronic Frontier Foundation (EFF) and Pierce Bainbridge Beck Price & Hecht LLP filed a class action lawsuit today on behalf of AT&T customers in California to stop the telecom giant and two data location aggregators from allowing numerous entities—including bounty hunters, car dealerships, landlords, and stalkers—to access wireless customers’ real-time locations without authorization.

An investigation by Motherboard earlier this year revealed that any cellphone user’s precise, real-time location could be bought for just $300. The report showed that carriers, including AT&T, were making this data available to hundreds of third parties without first verifying that users had authorized such access. Not only did AT&T fail to obtain its customers’ express consent; making matters worse, it created an active marketplace that trades on its customers’ real-time location data.

“AT&T and data aggregators have systematically violated the location privacy rights of tens of millions of AT&T customers,” said EFF Staff Attorney Aaron Mackey. “Consumers must stand up to protect their privacy and shut down this illegal market. That’s why we filed this lawsuit today.”

The lawsuit alleges AT&T violated the Federal Communications Act and engaged in deceptive practices under California’s unfair competition law, as AT&T deceived customers into believing that the company was protecting their location data. The suit also alleges that AT&T, LocationSmart, and Zumigo have violated California’s constitutional, statutory, and common law rights to privacy.

“The location data AT&T offered up for sale is extremely precise and can locate any of its wireless subscribers in real time, providing a window into the intimate details of their lives: where they go to the doctor, where they worship, where they live, and much more,” said Abbye Klamann Ognibene, an associate at Pierce Bainbridge.

“To sell this information without any notification to users is deceptive, extraordinarily invasive of their privacy, and illegal,” said Thomas D. Warren, a partner at Pierce Bainbridge.

The lawsuit, Scott, et al. v. AT&T Inc., et al., filed in the U.S. District Court of the Northern District of California, seeks money damages and an injunction against AT&T, as well as the involved location data aggregators, LocationSmart and Zumigo. The injunction would prohibit AT&T from selling customer location data and ensure that any location data already sold is returned to AT&T or destroyed. 

For the complaint:
https://www.eff.org/document/scott-v-att-geolocation-complaint

For more information about the case:
https://www.eff.org/cases/geolocation-privacy

Contact: Aaron Mackey, Staff Attorney, amackey@eff.org

Knowing the “Value” of Our Data Won’t Fix Our Privacy Problems

EFF - Mon, 07/15/2019 - 3:14pm

Some lawmakers, seeking to hold companies accountable for the way they collect and profit from our personal information, are pushing a new idea: requiring companies to report a dollar value for the data they collect from us.

Some frame this reporting as a first step towards requiring companies to share with consumers the wealth our personal information generates for these companies. Yet knowing how much your personal information is worth to a company doesn’t actually protect your privacy—and the “pay for privacy” schemes that would probably follow from valuation reporting would actually harm it. What we need instead are strong laws to safeguard our privacy and prevent the reckless collection, use, and disclosure of our information.

Proposals to place a concrete dollar value on data and, by extension, on our privacy, have popped up across the country this year. Sens. Mark Warner (D-Va.) and Josh Hawley (R-Mo.) last month introduced the “Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data,” or DASHBOARD Act, which would require larger companies to report the value of customer data. Rep. Doug Collins (R-Ga.) recently proposed a bill to recognize consumer data as property. And in Oregon, companies pushed a bill, which the ACLU of Oregon and EFF opposed, to directly pay people for the “value” of their health data as calculated by companies.

Assigning a value to your personal information might appear attractive, at first blush. Companies have grown rich off the insatiable collection of our personal information. It is tempting to demand a cut of the money they make from our clicks, our likes, and our networks of contacts.

But this is a mistake. If anything, assigning a dollar value may give the false impression that, at a value of $5, $30, or $200 for your personal information, the data collection companies’ conduct is no big deal. But a specific piece of information can be priceless in a particular context. Your location data may cost a company less than a penny to buy, yet cost you your physical safety if it falls into the wrong hands. Companies advertised lists of 1,000 people with conditions such as anorexia, depression, and erectile dysfunction for $79 per list. Such embarrassing information in the wrong hands could cost someone their job or their reputation.

Our information should not be thought of as our property this way, to be bought and sold like a widget. Privacy is a fundamental human right. It has no price tag. No person should be coerced or encouraged to barter it away. And it is definitely not a good deal for people to receive a handful of dollars in exchange for allowing companies’ invasive data collection to remain unchecked.

No doubt companies that collect data should be more forthcoming about how they collect, share, sell, and use personal information. That’s why EFF supports laws requiring transparency about data collection, including the right to know the specific items of personal information companies have collected on you, and the specific third parties who received it—not just categorical descriptions of the general kinds of data and recipients. Users should have a legal right to obtain a copy of the data they have provided to an online service provider.

But while knowing the supposed value of your data itself is not harmful to consumers, it is difficult—if not completely impossible—to assess the true individual cost of losing control of that information.

Individuals are also poorly situated to understand the impact their single sale could have as part of a large-scale business practice—for themselves and for our broader society. You may think nothing of sharing where you went to high school with a company. But researchers at the University of California, Berkeley have found that mortgage lenders, using proxy variables such as a borrower’s high school, charge African American and Latinx homebuyers higher interest rates than whites in similar financial situations. This research suggests the way companies use this information overcharges minority homeowners up to $500 million per year—and comes with the further social cost of dashing the dreams of between 74,000 and 1.3 million creditworthy minority applicants to own a home at all.

Associating the sale of personal information with income is also particularly dangerous for low-income families. Proponents of the Oregon pay-for-privacy bill suggested that this would provide a new income stream for lower-income people. The truth is that pay-for-privacy creates a world where privacy is a luxury afforded only to those who can afford to protect their fundamental rights. EFF strongly opposes “pay-for-privacy” schemes, which discourage all people from exercising their right to privacy and create classes of privacy “haves” and “have-nots.” Corporations should not be allowed to require a consumer to pay a premium, or waive a discount, in order to stop the corporation from vacuuming up, and profiting from, the consumer’s personal information.

Some advocates of assigning data a monetary value also push the idea that privacy is dead—and that a “dividend” is the only way for consumers to claw back any benefit from a system designed to exploit them. In truth, we are seeing more momentum behind strong privacy legislation nationally and internationally than we have in years. We at EFF are nowhere near ready to give up on stronger privacy legislation. Nor should our country’s lawmakers.

Simply knowing the arbitrary “value” of our personal information to companies won’t fix our privacy problems. Requiring companies to respect everyone’s privacy rights—and giving individuals the power to hold those companies accountable when they don’t—will do much more. If lawmakers are serious about confronting the erosion of our privacy rights, they should set the idea of valuing data aside and instead pass strong privacy legislation.

President Trump Loses — Government Can't Block Critics on Social Media

EFF - Fri, 07/12/2019 - 3:26pm

In a long-awaited ruling, the Second Circuit has found that the replies section of President Trump’s Twitter account @realDonaldTrump is a public forum, and that the President cannot block his critics from reading his tweets or participating in the forum merely because he dislikes the views they express. This ruling, along with two previous federal appellate court decisions, directly affects thousands of government social media accounts across the country. Government officials and agencies who operate their social media accounts as public or non-public forums must not delete comments or block users because the officials disagree with the viewpoints expressed.

The President and his advisors had admitted earlier in the case that they blocked the plaintiffs because they disagreed with the viewpoints they expressed in their replies. As a result, the case addressed only the issue of viewpoint discrimination, and not any other reason for blocking, such as harassment.

The Second Circuit made several important findings. 

First, the court found that @realDonaldTrump is in fact controlled and maintained by the government and used to conduct official governmental business. The court relied on the bio in the @realDonaldTrump profile and other public statements, and documented how the President used the account to announce cabinet changes, shifts in government policy, and even talks with North Korea about nuclear disarmament. In finding that the blocking was state, and not private, action, the court rejected the government’s argument that the President still used the account as a private person, and found it irrelevant that he originally started the account as a private citizen.

Second, the Court found that the interactive components of @realDonaldTrump comprise a public forum, that is, a space generally open, indiscriminately, for the speech of the public, and in which viewpoint discrimination is prohibited. 

Next, the court rejected the argument that workarounds existed to allow blocked users to read and reply to the @realDonaldTrump feed in any event, finding that such workarounds nevertheless hindered the blocked users’ ability to participate in the forum, a burden on their speech that “runs afoul of the First Amendment.” 

Finally, the Second Circuit rejected the government’s argument that the entire Twitter page for @realDonaldTrump is government speech, in essence, a claim that the President is speaking by curating his reply feed to express his own message. When the government itself speaks, it may pick and choose the viewpoints it wants to express. The court found that the replies, mentions, retweets, and likes were instead the speech of the multiple users who post them, not the government. The court’s ability to distinguish the different functionalities of Twitter and social media pages is fundamental to preserving the public’s ability to speak freely as the government continues to adopt new technologies for communicating with the public.

This case sends a strong message to government officials across the country that the First Amendment will in most cases apply to the public’s interactions on the officials’ social media pages. At EFF, we’ve received numerous complaints from across the country and across the political spectrum from constituents who’ve been blocked from receiving and/or commenting upon officials’ and agencies’ posts because of political disagreements or criticism.

We’ve also seen that government use of Twitter, Facebook, and other social media platforms is widespread—and that when local and state governments start limiting who can see government messages, the consequences can be disastrous. Our briefs for this case in both the district court and Second Circuit laid out numerous examples of how government officials and agencies use social media during emergencies, and later in disaster recovery. For example, in Northern California, local fire departments have taken to Twitter to alert the public about evacuations and the control of California’s deadly wildfires. Users who are blocked risk missing out on the most up-to-date information.

EFF has also been active in other government social media cases, challenging a multitude of speech discrimination practices.

We are representing People for the Ethical Treatment of Animals (PETA) in a case against Texas A&M University regarding the school’s automatic and manual deletion of PETA’s comments and posts on the A&M Facebook page – all because A&M wanted to silence a PETA campaign to end a dog lab at A&M that breeds and experiments on dogs with muscular dystrophy. By deleting these comments, A&M was engaging in viewpoint discrimination, which is illegal in any forum, and in content discrimination, which is illegal in forums that the government specifically creates as a place for speech. We are also participating in an Oklahoma state court case, Holcomb v. Hickman, a challenge to a local official’s deletion of comments in a city Facebook group. 

EFF also submitted an amicus brief in Robinson v. Hunt Co., Texas, a case in the 5th Circuit that decided that deleting a critic’s comment on the Sheriff’s Facebook page, even though the deletion followed a policy set by the Sheriff’s office, is illegal viewpoint discrimination. 

We’re waiting to see how the government responds to the @realDonaldTrump case. The Department of Justice could appeal to the Supreme Court, but at this stage, the case law is not on the government’s side. Further, we are waiting to see if the President finally unblocks all of his critics, not just the named plaintiffs in the Knight case.

The case against President Trump isn’t the first to find that government officials cannot block their critics on social media, but it is certainly the loudest. Courts must hold all elected officials, including the President, to the same restrictions against limiting speech online that have long been upheld in the physical world. That will allow everyone to speak freely, no matter the medium. 

Adjusting the Scope of our Security Vulnerability Disclosure Program

EFF - Fri, 07/12/2019 - 3:14pm

At EFF we put security and privacy first. That's why over three years ago we launched EFF's Security Vulnerability Disclosure Program. The Disclosure Program is a set of guidelines on how security researchers can tell EFF about bugs in the software we develop, like HTTPS Everywhere or Certbot. When we launched the program, it was a bit of an experiment. After all, as a lean, member-driven nonprofit, we can't give out the tremendous cash rewards that large corporations can provide for zero days. Instead, all we can offer security researchers in return for their hard work is recognition on our EFF Security Hall of Fame page and other non-cash rewards like EFF gear or complimentary EFF memberships.

Despite the limited rewards, the program has been a tremendous success. As of June 1, 2019, we've had over seventy different security researchers report valid security vulnerabilities to us, as you can see on our Security Hall of Fame page.

Today we're making a few changes to the program based on our experience over the past three years. In particular, we're narrowing the scope of the program to focus only on specific software projects written by EFF. We still appreciate it when people report vulnerabilities in our server configurations, but our first priority is the security of the software we put out into the world for anyone to use. Additionally, we're clarifying that we won't be able to provide physical rewards to people located in some areas where we've had difficulty shipping things in the past, particularly India and Pakistan. While we appreciate all the hard work of security researchers there, trying to get international shipments to those countries has put a tremendous strain on our tiny rewards team. Of course we'll still offer recognition on our Security Hall of Fame page to anyone, regardless of where on the planet they live.

Security research is a prerequisite for safe computing. We're lucky to have such a talented base of supporters and members who can donate their time to help us improve online security, so we invite you to continue helping us by inspecting, analyzing, and improving the code we write.

Visit our Security Vulnerability Disclosure Program page to view the full reporting guidelines. And don't forget to download a copy of the GPG key to use when submitting your vulnerabilities. Happy hunting!
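If you'd rather script the encryption step than drive GPG by hand, here is a minimal sketch using the third-party python-gnupg package (pip install python-gnupg). The file names below are placeholders, not the actual artifacts from our program page, so substitute the key you downloaded and your own report:

    # Minimal sketch, assuming the third-party python-gnupg package and
    # placeholder file names -- substitute the real key from the
    # Disclosure Program page.
    import gnupg

    gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

    # Import the program's public key (placeholder file name).
    with open("eff-security-key.asc") as key_file:
        import_result = gpg.import_keys(key_file.read())

    # Encrypt the report so only the holder of that key can read it.
    with open("vulnerability-report.txt") as report:
        encrypted = gpg.encrypt(
            report.read(),
            recipients=import_result.fingerprints,  # encrypt to the imported key
            always_trust=True,  # we verified the key fingerprint out of band
        )

    if encrypted.ok:
        print(str(encrypted))  # ASCII-armored PGP message, ready to paste into an email
    else:
        print("Encryption failed:", encrypted.status)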

Interoperability: Fix the Internet, Not the Tech Companies

EFF - Thu, 07/11/2019 - 4:26pm

Everyone in the tech world claims to love interoperability—the technical ability to plug one product or service into another product or service—but interoperability covers a lot of territory, and depending on what's meant by interoperability, it can do a lot, a little, or nothing at all to protect users, innovation and fairness.

Let's start with a taxonomy of interoperability:

Indifferent Interoperability

This is the most common form of interoperability. Company A makes a product and Company B makes a thing that works with that product, but doesn't talk to Company A about it. Company A doesn't know or care to know about Company B's add-on.

Think of a car's cigarette lighter: these started in the 1920s as aftermarket accessories that car owners could have installed at a garage; over time they became popular enough that they came standard in every car. Eventually, third-party companies began to manufacture DC power adapters that plugged into the lighter receptacle, drawing power from the car engine's alternator. This became widespread enough that it was eventually standardized as ANSI/SAE J563.

Standardization paved the way for a variety of innovative new products that could be made by third-party manufacturers who did not have to coordinate with (or seek permission from) automotive companies before bringing them to market. These are now ubiquitous, and you can find fishbowls full of USB chargers that fit your car-lighter receptacle at most gas stations for $0.50-$1.00. Some cars now come with standard USB ports (though for complicated reasons, these tend not to be very good chargers), but your auto manufacturer doesn't care if you buy one of those $0.50 chargers and use it with your phone. It's your car, it's your car-lighter, it's your business.

Cooperative Interoperability

Sometimes, companies are eager to have others create add-ons for their products and services. One of the easiest ways to do this is to adopt a standard: a car manufacturer that installs an ANSI/SAE J563-compliant car-lighter receptacle in its cars enables its customers to use any compatible accessory with their cars; any phone manufacturer that installs a 3.5mm headphone jack allows anyone who buys that phone to plug in anything that has a matching plug, even exotic devices like Stripe's card-readers, which convert your credit-card number to a set of tones that are played into a vendor's phone's headphone jack, to be recognized and re-encoded as numbers by Stripe's app.

Digital standards also allow for a high degree of interoperability: a phone vendor or car-maker who installs a Bluetooth chip in your device lets you connect any Bluetooth accessory with it—provided that the manufacturer supports that accessory, or at least takes no steps to prevent it from being connected.

This is where things get tricky: manufacturers and service providers who adopt digital standards can use computer programs to discriminate against accessories, even those that comply with the standard. This can be extremely beneficial to customers: you might get a Bluetooth "firewall" that warns you when you're connecting to a Bluetooth device that's known to have security defects, or that appears on a blacklist of malicious devices that siphon away your data and send it to identity thieves.

But as with all technological questions, the relevant question isn't merely "What does this technology do?" It's "Who does this technology do it to and who does it do it for?"

Because the same tool that lets a manufacturer help you discriminate against Bluetooth accessories that harm your well-being allows the manufacturer to discriminate against devices that harm its well-being (say, a rival's lower-cost headphones or keyboard) even if these accessories enhance your well-being.

In the digital era, cooperative interoperability is always subject to corporate boundaries. Even if a manufacturer is bound by law to adhere to a certain standard—say, to provide a certain electronic interface, or to allow access via a software interface like an API—those interfaces are still subject to limits that can be embodied in software.

A digitally enabled car-lighter receptacle could be made to support only a limited range of applications—charging via USB but not USB-C or Lightning, or only charging phones but not tablets—and software could be written to enforce those limits. Even a very permissive "smart lighter-receptacle" that accepted every known device as of today could be designed to reject any devices invented later on, unless the manufacturer chose to permit their use. A manufacturer of such a device could truthfully claim to support "every device you can currently plug into your car lighter," but still maintain a pocket veto over future devices as a hedge against new developments that it decides are bad for the manufacturer and its interests.
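For the flavor of how such a pocket veto works, here is a minimal sketch in Python. The device IDs are made up, and this illustrates the general pattern, not any real manufacturer's firmware:

    # A hypothetical firmware check: every accessory that exists at ship time
    # is on the allowlist, so the manufacturer can truthfully claim to support
    # "every device you can currently plug in" -- while anything invented later
    # is rejected until the manufacturer chooses to add it.
    KNOWN_DEVICES = {
        "usb-charger-v1",  # made-up IDs for accessories that exist today
        "usb-charger-v2",
        "tire-inflator",
    }

    def accept_accessory(device_id: str) -> bool:
        """Run when an accessory is plugged in; unknown means vetoed."""
        return device_id in KNOWN_DEVICES

    print(accept_accessory("usb-charger-v2"))  # True: on the list at ship time
    print(accept_accessory("solar-inverter"))  # False: invented later, silently vetoed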

What's more, connected devices and services can adjust the degree of interoperability their digital interfaces permit from moment to moment, without notice or appeal, meaning that the browser plugin or social media tool you rely on might just stop working.

Which brings us to...

Adversarial Interoperability

Sometimes an add-on comes along that connects to a product whose manufacturer is outright hostile to it: third-party ink for your inkjet printer, or an unauthorized app for your iPhone, or a homebrew game for your console, or a DVR that lets you record anything available through your cable package, and that lets you store your recordings indefinitely.

Many products actually have countermeasures to resist this kind of interoperability: checks to ensure that you're not buying car parts from third parties, or fixing your own tractor.

When a manufacturer builds a new product that plugs into an existing one despite the latter's manufacturer's hostility, that's called "adversarial interoperability" and it has been around for about as long as the tech industry itself, from the mainframe days to the PC revolution to the operating system wars to the browser wars.

But as technology markets have grown more concentrated and less competitive, what was once business-as-usual has become almost unthinkable, not to mention legally dangerous, thanks to abuses of cybersecurity law, copyright law, and patent law.

Taking adversarial interoperability off the table breaks the tech cycle in which a new company enters the market, rudely shoulders aside its rivals, grows to dominance, and is dethroned in turn by a new upstart. Instead, today's tech giants show every sign of establishing a permanent, dominant position over the internet.

"Punishing" Big Tech by Granting It Perpetual Dominance

As states grapple with the worst aspects of the Internet—harassment, identity theft, authoritarian and racist organizing, disinformation—there is a real temptation to "solve" these problems by making Big Tech companies legally responsible for their users' conduct. This is a cure that's worse than the disease: the big platforms can't subject every user's every post to human review, so they use filters, with catastrophic results. At the same time, these filters are so expensive to operate that they make it impossible for would-be competitors to enter the market. YouTube has its $100 million Content ID copyright filter now, but if it had been forced to find an extra $100,000,000 to get started in 2005, it would have died a-borning.

But assigning these expensive, state-like duties to tech companies also has the perverse effect of making it much harder to spark competition through careful regulation or break-ups. Once we decide that providing a forum for online activity is something that only giant companies with enough money to pay for filters can do, we also commit to keeping the big companies big enough to perform those duties.

Interoperability to the Rescue?

It's possible to create regulation that enhances competition. For example, we could introduce laws that force companies to follow interoperability standards and oversee the companies to make sure that they're not sneakily limiting their rivals behind the scenes. This is already a feature of good telecommunications laws, and there's lots to like about it.

But a mandate to let users take their data from one company to another—or to send messages from one service to another—should be the opening move, not the end-game. Any interoperability mandate risks becoming the ceiling on innovation, not the floor.

For example, as countries around the world broke up their national phone company monopolies, they made rules forcing them to allow new companies to use their lines, connect to their users and share their facilities, and this enabled competition in things like long distance service.

But these interoperability rules were not the last word: the telcos weren't just barred from discriminating against competitors who wanted to use their long-haul lines; thanks to earlier precedent, they were also not able to control who could make devices that plugged into those lines. This allowed companies to make modems that could connect to phone lines. As the Internet crept (and then raced) into Americans' households, the carriers had ample incentive to control how their customers made use of the net, especially as messaging and voice-over-IP eroded the massive profits from long-distance and SMS tariffs. But they couldn't, and that helplessness to steer the market let new companies and their customers create a networked revolution.

The communications revolution owes at least as much to the ability of third parties to do things that the carriers hated—but couldn't prevent—as it does to the rules that forced them to interconnect with their rivals.

Fix the Internet, Not the Tech Companies

The problems of Big Tech are undeniable: using the dominant services can be terrible, and now that they've broken the cycle of dominance and dethroning, the Big Tech companies have fortified their summits such that others dare not besiege them.

Today, much of the emphasis is on making Big Tech better by charging the companies to filter and monitor their users.

The biggest Internet companies need more legal limits on their use and handling of personal data. That’s why we support smart, thorough new Internet privacy laws. But laws that require filtering and monitoring user content make the Internet worse: more hostile to new market entrants (who can't afford the costs of compliance) and worse for Internet users' technological self-determination.

If we're worried that shadowy influence brokers are using Facebook to launch sneaky persuasion campaigns, we can either force Facebook to make it harder for anyone to access your data without Facebook's explicit approval (this assumes that you trust Facebook to be the guardian of your best interests)—or we can bar Facebook from using technical and legal countermeasures to shut out new companies, co-ops, and projects that offer to let you talk to your Facebook friends without using Facebook's tools, so you can configure your access to minimize Facebook's surveillance and maximize your own freedom.

The second way is the better way. Instead of enshrining Google, Facebook, Amazon, Apple, and Microsoft as the Internet’s permanent overlords and then striving to make them as benign as possible, we can fix the Internet by making Big Tech less central to its future.

It's possible that people will connect tools to their Big Tech accounts that do ill-advised things they come to regret. That's kind of the point, really. After all, people can plug weird things into their car's lighter receptacles, but the world is a better place when you get to decide how to use that useful, versatile ANSI/SAE J563-compliant plug—not GM or Toyota.

Life-Altering Copyright Lawsuits Could Come to Regular Internet Users Under a New Law Moving in the Senate

EFF - Wed, 07/10/2019 - 7:04pm

The Senate Judiciary Committee intends to vote on the CASE Act, legislation that would create a brand new quasi-court for copyright infringement claims. We have expressed numerous concerns with the legislation, and Congress has not remedied the bill’s serious inherent problems before moving it forward. In short, the bill would supercharge a “copyright troll” industry dedicated to filing as many “small claims” against as many Internet users as possible in order to make money through the bill’s statutory damages provisions. Every single person who uses the Internet and regularly interacts with copyrighted works (that’s everyone) should contact their Senators to oppose this bill.

Take Action

Tell the Senate Not to Enable Copyright Trolls

Easy $5,000 Copyright Infringement Tickets Won’t Fix Copyright Law

Making it so easy to sue Internet users for allegedly infringing a copyrighted work that an infringement claim comes to resemble a traffic ticket is a terrible idea. This bill creates a situation where Internet users could easily be on the hook for multiple $5,000 copyright infringement judgments without many of the traditional legal safeguards or rights of appeal our justice system provides.

The legislation would allow the Copyright Office to create a “determination” process for claims seeking up to $5,000 in damages:

Regulations For Smaller Claims.—The Register of Copyrights shall establish regulations to provide for the consideration and determination, by at least one Copyright Claims Officer, of any claim under this chapter in which total damages sought do not exceed $5,000 (exclusive of attorneys’ fees and costs). A determination issued under this subsection shall have the same effect as a determination issued by the entire Copyright Claims Board.

This could be read as permission for the Copyright Office to dispense with even the meager procedural protections provided elsewhere in the bill when a rightsholder asks for $5,000 or less. In essence, what this means is that any Internet user who uploads a copyrighted work could find themselves subject to a largely unappealable $5,000 penalty without anything resembling a trial or evidentiary hearing. Ever shared a meme, posted a photo that isn’t yours, or uploaded an image you didn’t create? Under this legislation, you could easily find yourself stuck with a $5,000 judgment debt following the most trivial nod towards due process.

Every Internet User Could Face Life-Altering Money Judgments Thanks to Statutory Damages

Proponents of the legislation argue that the bill’s cap on statutory damages in a new “small claims” tribunal will protect accused infringers. But the CASE Act’s cap of $15,000 per work is far higher than the damages caps in most state small claims courts—and statutory damages require no proof of harm or illicit profit. The Register of Copyrights would be free to raise that cap at any time. And the CASE Act would also remove a vital rule that protects Internet users – the registration precondition on statutory damages.

Today, someone who is going to sue a person for copyright infringement has to register their work with the Copyright Office before the infringement began, or within three months of first publication, in order to be entitled to statutory damages. Without a timely registration, violating someone’s copyright would only put an infringer on the hook for what the violation actually cost the copyright holder (called “actual damages”), or the infringer’s profits. This is a key protection for the public because copyright is ubiquitous: it automatically covers nearly every creative work from the moment it’s set down in tangible form. But not every scribble, snapshot, or note is eligible for statutory damages—only the ones that U.S. authors make a small effort to protect up front by filing for registration. If Congress passes this bill, however, timely registration will no longer be required before a copyright holder can seek no-proof statutory damages of up to $7,500 per work. In other words, nearly every photo, video, or bit of text on the Internet could suddenly carry a $7,500 price tag if uploaded, downloaded, or shared, even if the actual harm from that copying is nil.

For many Americans, in a country where the median income is $57,652 per year, this $7,500 price tag for what has become regular Internet behavior would mean life-altering lawsuits from the copyright trolls that will exploit this new law. That is what happens when you eliminate the processes that tend to ensure only a truly motivated copyright holder can obtain statutory damages.

Censorship of Speech Will Become More Pervasive Under this Legislation

Another major problem with the CASE Act is how it transforms a Digital Millennium Copyright Act (DMCA) notice into a long-term censorship tool. Under current law, if a copyright holder submits a takedown notice to an online platform alleging that your post infringed their copyright, you have a right to file a counter-notice if you disagree. False takedown claims happen all the time, suppressing perfectly lawful speech, and counter-notices are an important, though flawed, check on abuse. But the CASE Act would allow a party that filed a takedown notice to also bring a claim with the new “small claims” tribunal. When they do so, the Internet platform doesn’t have to honor the counter-notice by putting the posted material back online within 14 days. Already, some of the worst abuses of the DMCA occur with time-sensitive material: even a false infringement notice can effectively censor that material for up to two weeks during a newsworthy event, for example. The CASE Act would allow unscrupulous filers to extend that period by months, for a small filing fee.

If all these outcomes sound terrible to you and you want to send a clear message to Congress not to move forward, then you need to contact your Senators right away to tell them you oppose the CASE Act (S. 1273) and want them to oppose it on your behalf.

Take Action

Tell the Senate Not to Enable Copyright Trolls

California’s Senate Judiciary Committee Blocks Efforts to Weaken California’s Privacy Law

EFF - Wed, 07/10/2019 - 6:15pm

The California Senate Judiciary Committee heard five bills on Tuesday that EFF and other privacy advocates strongly opposed. These measures, backed by big business and the tech industry, would have eviscerated the California Consumer Privacy Act (CCPA), a landmark privacy law passed last year. We thank the Senate Judiciary Committee, in particular Chair Senator Hannah-Beth Jackson and the committee’s staff, for blocking efforts to weaken the state's baseline privacy protections.

Unfortunately, the California legislature failed to add much-needed additional protections to the CCPA this year when it blocked bills from California Senator Hannah-Beth Jackson and Assemblymember Buffy Wicks. These measures would have given consumers rights over how companies use their personal data, and increased their ability to exercise and enforce their rights under the CCPA. Worse, lawmakers advanced several bills that each would have weakened the CCPA on its own. Taken together, they would have significantly eroded this law, which is set to go into effect in January 2020.

Thankfully, Senate Judiciary Committee members voted down A.B. 873, which privacy advocates opposed because it would have weakened the definition of “personal information” and undermined critical privacy protections in the CCPA.

We are also pleased that Assemblymember Ken Cooley chose not to bring the most problematic of the privacy-eroding bills, A.B. 1416, up for a vote, and that it will not move forward this session. A.B. 1416 would have created an enormous loophole allowing any company that sells or shares information with the government to ignore your privacy rights. It faced strong opposition from privacy advocates and immigrant rights advocates.

The committee passed A.B. 25 (Asm. Ed Chau), after the author agreed to amendments that assuaged the concerns of privacy groups, employer advocates, and labor unions. The bill, originally intended to clean up implementation concerns with the CCPA, would have removed CCPA protections from data that companies collect about their employees. This bill contains a one-year sunset, with stakeholders committing to discuss employee privacy legislation more comprehensively in 2020.

The committee also spent significant time discussing the two remaining bills aimed at weakening the CCPA: A.B. 846 (Asm. Autumn Burke), which would make it easier for businesses to force consumers to pay for their privacy rights under the guise of loyalty programs, and A.B. 1564 (Asm. Mark Berman), which would make it harder for low-income Californians to exercise their privacy rights. We appreciate that Sen. Jackson negotiated with both authors to take amendments in committee on their bills that address some of our concerns. We look forward to continuing these conversations.

Finally, we thank every person who spoke up to tell the Senate Judiciary Committee and its chair to defend the basic privacy protections granted by the CCPA. We will continue the fight to improve the privacy rights of all Californians.

Don’t Be Scared: European and Chinese Software Patents Won't Hurt U.S. Businesses

EFF - Wed, 07/10/2019 - 3:03pm

A Senate subcommittee recently concluded three days of testimony about a proposed patent bill that, we have explained, would be a terrible idea. Proponents of the bill keep saying that Section 101 of  U.S. patent law, which bars patents on things like abstract ideas and laws of nature, needs to be changed. One recurring argument is that Section 101 is killing patents that are being granted in Europe and China and that somehow this hurts innovation in the U.S.

The argument is flawed for many reasons. Proponents of this bill have vastly overstated the number of Section 101 rejections. Patent applications are rejected for many different reasons. For instance, an examiner could find that an invention would have been obvious—that might lead to a Section 103 rejection. Or an examiner could find that the application simply isn’t clear at all, leading to a Section 112 rejection.

But proponents of the bill, such as former US Patent Office Director David Kappos, simply claim there’s an epidemic of Section 101 rejections by lumping all these different types of rejections together. When Joshua Landau, a patent attorney who works for a computer industry group, examined a selection of the data set that Kappos was using, he found that only 13 percent of the applications in the group were clearly Section 101 rejections.

Rejection Doesn't Have to Hurt

It’s also wrong to assume that all rejections of patent applications are bad. Rejections often lead applicants to amend their claims, usually by narrowing their scope to avoid prior art references or add clarity that gives the public more notice.  When an applicant can’t amend around an examiner’s objection, rejecting that application is good for the public. When applications are rejected, the public is protected from an undeserved monopoly. A patent holder could control a market, limit competition, and raise prices for us all.

Practically speaking, patents granted by foreign countries have little to no effect on competition and innovation in the U.S. because foreign patents have no power here. They can’t be used to stop anyone from making, using, or selling anything on U.S. soil. They can only be used to stop people from doing those things in their country of issuance.

A glut of foreign patents won’t be a threat to U.S. companies. It could even give developers and start-ups here a competitive edge over their foreign rivals. Granting more software patents abroad, and making them harder to challenge, will force software and e-commerce companies there, including small ones, to fend off patent licensing demands backed by the threat of lawsuits. Foreign companies would have to spend less on engineers, and more on lawyers. We don’t view that as a good thing—bad patents in any country can threaten innovation globally. But it’s not useful, or accurate, to assume other countries are gaining an edge by having more patents.

So even if what we’re hearing about foreign patent practices were true, it’s hardly a reason to re-write U.S. patent law. Given the proven success of the U.S. software industry, it would only confirm that the laws we have are working.

Patently Misunderstood

Finally, much of what proponents of the proposed patent bill are saying about patents in Europe and China is just factually inaccurate.

First, they claim that the European Patent Office is granting applications that would fail under Section 101 in the U.S. That’s not true. While Europe’s software patent law is written differently, it includes sections that are roughly the same, in practice, as today’s Section 101. In the U.S., patents on software are limited by Section 101 and by Supreme Court case law such as Alice v. CLS Bank.

In Europe, there is an explicit rule against patenting “mathematical methods” and “programs for computers.” That prohibition isn’t as broad as it sounds—it’s limited by guidelines allowing patents on computer programs that have a “technical character” and on artificial intelligence software that has a “technical purpose.” As a result, Europe has similar rules around patenting software—for better and for worse. The point here is, proponents are wrong that Europe grants lots of software patents that the U.S. rejects.

Second, bill proponents have said that China is granting lots of patents. That is true, but the vast majority of them are extremely low-value. Recent news reports suggest that only 23 percent of Chinese patents even cover “inventions”—the majority are for “utility models,” which are often allowed to lapse after a few years. And virtually none of the applications originating in China are “triadic patents” (patents filed jointly in the patent offices of Japan, the United States, and the European Union), which are widely considered “the gold standard” for patent protection.

Applicants in China are motivated at least in part by government intervention. The Chinese government provides financial and other incentives for new patent applications—but not renewals, which is why abandonment rates are as high as 91% for design patents.

In other words, there’s no reason to think the Chinese patent system is a model worth abandoning longstanding U.S. principles to follow.

Despite what those who wish to re-write patent law’s most important parts are saying, the state of the laws in Europe and China doesn’t suggest, in any way, that the U.S. should change ours. The Tillis-Coons patent bill currently under consideration would open the door to patents on things like abstract ideas and laws of nature. That’s a terrible idea, and Congress should reject it. 

TAKE ACTION

TELL CONGRESS WE DON'T NEED MORE BAD PATENTS

Could Regulatory Backlash Entrench Facebook’s New Cryptocurrency Libra?

EFF - Wed, 07/10/2019 - 1:42pm

Facebook’s new cryptocurrency Libra has garnered attention from lawmakers and consumer groups since it was announced last month. And it’s no wonder: with a wince-inducing history of data disclosure scandals, the Facebook brand has become synonymous with ineptitude at protecting privacy. They’re bringing that tarnished reputation to cryptocurrency, a field that has already attracted more than its fair share of bad actors that too often overshadow the blockchain innovators working to protect user rights. As Congress gears up to investigate this issue, we’re frankly worried. On top of our many concerns about the implications of Libra, there is a serious possibility that reactive legislation could further harm consumers.


We’ve criticized Facebook for years, and we share the concerns of regulators who want to ensure people’s privacy and rights are protected from Facebook’s abuses. But make no mistake: a disproportionate regulatory backlash to Libra could have dire consequences for Internet users. Legislation that tries to ban the publication of open source software, to impose onerous licensing obligations on those who write code, or to regulate non-custodial blockchain services as if they were banks will have a chilling effect on innovation in the space. The end result would be that the only companies able to navigate the complicated regulatory landscape are those with significant financial and legal resources. In other words, regulatory backlash today could serve to entrench Facebook’s role in the space rather than unseat it.

Libra is a cryptocurrency set to launch in 2020 that, according to its whitepaper, seeks to “enable a simple global currency and financial infrastructure that empowers billions of people.” Like other blockchain-based projects, Libra uses cryptographic hashes to create a decentralized ledger that is extremely difficult to edit, meaning its history of transactions is immutable under most circumstances. This is one of the cornerstone innovations of blockchain technology: because the technology is designed so that removing or editing a prior transaction is extremely difficult, users can trust that once they have received a digital token it is truly theirs and can’t be transferred to someone else without their permission. Unlike the decentralized but computationally-intensive Bitcoin mining system, Libra uses a set of trusted validators to verify and attest to the transactions on Libra (rejecting transactions that would fraudulently attempt to double spend coins that have already been spent elsewhere, for example). Libra is also unusual in that it is a so-called stablecoin, designed to fluctuate less in value because the Libra Association intends to be “fully backing each coin with a set of stable and liquid assets.”
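To see why a hash-chained ledger is so hard to edit quietly, consider this toy Python sketch. It illustrates the general technique the whitepaper relies on, not Libra’s actual data structures:

    # Toy hash-chained ledger: each block commits to the hash of the block
    # before it, so editing old history breaks every later link.
    import hashlib
    import json

    def block_hash(block: dict) -> str:
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_block(chain: list, transactions: list) -> None:
        prev = block_hash(chain[-1]) if chain else "0" * 64
        chain.append({"prev_hash": prev, "transactions": transactions})

    def verify(chain: list) -> bool:
        """Recompute every link; any edit to an earlier block is detectable."""
        return all(
            chain[i]["prev_hash"] == block_hash(chain[i - 1])
            for i in range(1, len(chain))
        )

    chain = []
    append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
    append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
    print(verify(chain))  # True

    chain[0]["transactions"][0]["amount"] = 500  # tamper with history
    print(verify(chain))  # False: validators would reject the rewritten chain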

Facebook also announced Calibra, a new digital wallet for Libra. This wallet, which Facebook intends to make available in its other products like Messenger and WhatsApp, will enable users to send Libra to one another through the apps, and potentially to merchants as well.

There’s a lot being written about Libra and its wallet Calibra. For users concerned about their digital rights, there are a few things to focus on:

  • New innovations around transactional privacy are not part of Libra’s design. Whereas the first Bitcoin whitepaper proposed a ledger that would be pseudonymous but publicly accessible, newer innovations in applied cryptography have found ways to verify transactions and maintain a distributed ledger without disclosing the sending account, the receiving account, or the amount sent. However, Facebook has not chosen to integrate these privacy features into the Libra protocol.
  • The key features of decentralization that are the hallmark of Bitcoin and numerous other blockchain technologies were not built into Libra’s design. Many people have criticized Bitcoin for using a lot of energy in order to verify transactions. Alternatives to an open and permissionless blockchain give up some level of decentralization in order to be more efficient in energy consumption, transaction speed, and scalability. Similarly, Libra’s design choice creates a set of trusted intermediary organizations who communicate to reach consensus on which transactions are valid. These verifiers could, in theory, collude or be legally compelled to block or delay transactions in ways users might not expect. They could also collectively decide to roll back transaction history that they decided represented improper activity.
  • Calibra, the Facebook wallet for Libra, is a custodial wallet. The user doesn’t control the wallet’s security; the Calibra service does. That means that Calibra can see and interfere with transactions, block accounts from receiving funds, remove funds from user wallets, disclose users’ activities to law enforcement and governments, and can freeze or cancel accounts altogether. (A simplified sketch of what custody means in practice follows this list.) Calibra has put out an initial statement (PDF accessed from this URL on July 10; crossposted here for archival purposes) about its commitments to consumers that describes efforts it will implement to monitor the accounts of users and report on those activities to governments. Unfortunately, this document doesn’t promise much transparency or accountability about the data that will flow to governments. 
  • In addition, while Calibra cites the millions of unbanked people around the world in its press release, it notes that it intends to take steps to identify customers by requiring ID verification as well as using “the latest technologies and techniques, such as machine learning” to enhance its efforts to identify customers and report on their activities to governments.
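To make the custody point above concrete, here is a deliberately simplified sketch. The names are hypothetical and sign() is a toy stand-in for a real digital signature, but it shows the difference that matters: who holds the secret that authorizes a transfer.

    # Simplified sketch of custody; sign() is a toy stand-in for real signatures.
    import hashlib
    import hmac
    from typing import Optional

    def sign(secret: bytes, message: bytes) -> str:
        return hmac.new(secret, message, hashlib.sha256).hexdigest()

    class NonCustodialWallet:
        """The user alone holds the secret; only the user can authorize a transfer."""
        def __init__(self) -> None:
            self.secret = b"user-held-secret"  # never leaves the user's device

        def authorize(self, tx: bytes) -> str:
            return sign(self.secret, tx)

    class CustodialService:
        """The service holds users' secrets, so it can freeze accounts at will."""
        def __init__(self) -> None:
            self.secrets = {"alice": b"service-held-secret"}
            self.frozen: set = set()

        def authorize(self, user: str, tx: bytes) -> Optional[str]:
            if user in self.frozen:
                return None  # service policy, not the user, decides
            return sign(self.secrets[user], tx)

    service = CustodialService()
    print(service.authorize("alice", b"pay bob 5"))  # works today...
    service.frozen.add("alice")
    print(service.authorize("alice", b"pay bob 5"))  # None: frozen unilaterally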

A scalable, easy-to-use cryptocurrency built into popular global applications could expand the reach of cryptocurrencies. But Libra and Calibra aren’t currently in alignment with the ideas that make cryptocurrencies exciting from a digital rights perspective. Instead, we are promised a system that will be hardly more privacy-protective than Venmo (which trumpets the sexual encounters and drug purchases of users) and will have the same censorship tools as PayPal—which has a long history of freezing accounts with little explanation or option for appeal.

With Calibra rolling out to put Libra into the smartphones of people across the globe, we could soon see Libra itself become something akin to a default form of money for the Internet. Even if another wallet could rival Calibra’s functionality and ease of use, how many users will actually download a new app when Instagram, WhatsApp, and Facebook Messenger are already on their phones?

As Bloomberg’s Matt Levine points out, “if you replace the traditional social-regulatory technology of money creation with a new sort of computer technology of money creation, odds are that the power of money creation will end up not so much in the hands of free-spirited individual hackers around the world, but in the hands of some giant tech company.”

The advent of Libra has also brought increased regulatory scrutiny to this topic, and it’s no wonder: lawmakers were already thinking through how to hold Facebook accountable for its repeated abuses of people’s privacy rights. But short-sighted regulatory reactions today could prove a double harm to consumers, who not only will be living in a world where privacy-eroding Libra becomes increasingly used but in which onerous and technologically-specific regulations make better cryptocurrency alternatives less likely.

Early regulatory battles in this space have provided a glimpse into what a regulatory reaction might look like. Recently, the UK Treasury sought feedback as it updated its anti-money laundering regulations. Included in the feedback requests were questions about whether the publication of open source code should be regulated as a way to crack down on bad actors using cryptocurrency. The UK isn’t alone; a few months earlier, we saw similar worrisome language about punishing people for merely writing and publishing code in statements put out by the SEC related to their settlement with a decentralized exchange.

Regulation could still harm consumers even if it didn’t go as far as banning the publication of open source software. In May, a member of Congress called for a bill to outlaw cryptocurrency purchases by Americans, a move that would hamper future innovation. We’ve seen worrisome proposals like the original New York BitLicense, which sought to require that innovators go through a costly and lengthy licensing process and had implications for users as well as innovators. Poorly-crafted legislation could impose privacy-invasive regulations in the name of combating money laundering that could undermine new cryptocurrency technologies like ZCash and Monero, which endeavor to provide digital currency users with some of the same financial privacy enjoyed by anyone who uses cash.

Some regulators have recognized that their initial reactions to blockchain technologies were too severe. For example, a U.S. Commodity Futures Trading Commission (CFTC) Commissioner publicly stated his view that smart contract developers should be accountable when they could “reasonably foresee” that people would later use their code in a way that violated the law. Just four months later, after discussions with the blockchain community, he retracted that position.

Poorly-crafted laws today could chill innovation tomorrow. The end result of this would be that only companies that are rich enough to hire an army of lawyers and lobbyists will be able to wade through the complicated regulatory landscape—companies such as Facebook.

This is not to say that any regulation around cryptocurrency would be a bad idea. Those who abuse cryptocurrency to defraud consumers should be held accountable, and EFF participated in the United States Uniform Law Commission’s process for considering how to approach regulation in a way that could crack down on fraudsters without sacrificing future innovation. We would urge any policymakers moving into this space to similarly engage human rights and technology experts.

We’ve identified a few rules of the road for any policymakers stepping into this space to ensure they minimize the harm to consumers. Any regulation around blockchain:

  • Should be technologically neutral.
  • Should not punish those who merely write and publish code.
  • Should provide protections for individual miners, merchants who accept cryptocurrencies, and individuals who trade in cryptocurrency as consumers.
  • Should focus on custodial services, not non-custodial services that can’t trade assets without user participation.
  • Should provide an adequate on-ramp for new services to reach compliance.
  • Should recognize the human right to privacy and the deeply personal nature of financial transactions, and should not undermine privacy-enhancing innovation in this space.
  • Should recognize the important role of decentralized exchanges and other decentralized technologies in empowering consumers.
  • Should not chill future technological innovation that will benefit consumers.

Adhering to these principles while also attempting to pass a law through Congress—with all the concomitant pressure from the lobbyists of big tech companies and the traditional financial services—is no simple feat, especially when many in Congress may be unaware of the intricacies of how the technology works.

Lawmakers who were already unsympathetic to cryptocurrency’s freewheeling evolution are not going to be reassured by a currency associated with Congress’ least favorite company. But we urge them not to let their suspicion of Mark Zuckerberg lead them to restrain the many other innovators eager to compete with him.

A Better Digital Future Is Possible

EFF - Wed, 07/10/2019 - 12:01pm

It’s EFF’s 29th birthday, and we need you to help us celebrate! For two weeks only, become an EFF member for just $20 and get a set of Internet freedom-themed enamel pins to help you remember that together we’ve got this.

Each summer we renew EFF’s promise to defend the Internet’s incredible ability to power communities and movements, and we reflect on our commitment to fight for a world where tech puts users at the center. Not corporations, not advertisers, and not governments. You.

You can help us build that future by becoming an EFF member today.

Join EFF Today!

You play a crucial role in ensuring technology has a positive effect. Each of our nifty enamel pins is a reminder that your small actions can have a big impact on making the Internet safer, like using an encrypted messaging app.

The Privacy and Civil Liberties Oversight Board Signals It Will Investigate NSA Surveillance, Facial Recognition, and Terror Watchlists

EFF - Mon, 07/08/2019 - 3:19pm

After a long dormant stretch, the Privacy and Civil Liberties Oversight Board (PCLOB) has signaled it’s ready to tackle another big review of government surveillance and overreach. The PCLOB, an independent agency in the executive branch, last published a 2014 report on warrantless surveillance of the Internet by the U.S. intelligence community. While EFF welcomes the PCLOB’s efforts to bring oversight and transparency to the most controversial surveillance programs, we’ve disagreed with some of the Board’s findings, particularly on surveillance under FISA Section 702. So while it’s a good sign that the board is turning its attention to other major issues, its mixed history means it may be a little too soon to get your hopes up.  

This week, the board, which was created after a recommendation from the 9/11 Commission to look into the violation of civil liberties, released a strategic plan [PDF] that does not shy away from investigating some of the biggest threats to privacy in the U.S. According to the document, the board will be looking into the NSA’s collection of phone records, facial recognition and other biometric technologies being used in airport security, the processes that govern terrorist watchlists, and what it calls “deep dive” investigations into the NSA’s XKEYSCORE tool and the CIA’s counterterrorism activity, as well as many other government programs and procedures.

It’s hard to say what the possible results of these inquiries will be. The PCLOB has the right to review classified materials, and can request that the Attorney General issue subpoenas. In the past, however, the PCLOB has been remarkably measured in its critique of mass surveillance programs. Its 2014 report, for instance, found that the Section 702 program is sound “at its core,” and provides “considerable value” in the fight against terrorism—despite going on to make ten substantial recommendations for what the program must do to avoid infringing on people’s privacy. In other words, while finding serious privacy concerns with a program that vacuums up untold amounts of Americans’ communications, it still approved of the program’s existence.

At the very least, we should expect reports out of these investigations that detail exactly how and why our privacy is being trampled. Although any reports likely wouldn’t be out for over a year, they could become valuable evidence as Congress considers future legislation, especially provisions like Section 702 that regularly come up for amendment and reauthorization.

Related Cases: Jewel v. NSA

Media Briefing Monday: EFF and Partners Will Discuss California Bills Aimed at Weakening State’s Consumer Privacy Law

EFF - Fri, 07/05/2019 - 2:55pm
Lawmakers Carrying Water For Tech Companies Opposed to Strong Privacy Protections

San Francisco—On Monday, July 8, at 11 am, the Electronic Frontier Foundation (EFF), the ACLU, Common Sense Media, Privacy Rights Clearinghouse, and Consumer Reports will hold a conference call to brief reporters about five bills designed to weaken consumer privacy protections that are set for hearing in the California Senate.

Members of the media can RSVP to Stephanie Ong, song@commonsense.org, for call-in information to participate in the briefing.

Experts from the organizations will brief reporters on how the proposed measures will water down the California Consumer Privacy Act (CCPA). The proposals, backed by Big Tech interests, will, among other things, create a massive loophole for companies that sell or share information with the government, remove most CCPA protections over data collected by companies about their employees, and increase the cost for Californians to assert privacy rights. The bills are set for hearing July 9 at 1:30 pm before the Senate Judiciary Committee.

Experts on the call include:
Lee Tien, EFF Senior Staff Attorney
Hayley Tsukayama, EFF Legislative Activist
Elizabeth Gettelman Galicia, Vice President, Common Sense Kids Action
Ariel Fox Johnson, Senior Counsel, Policy & Privacy, Common Sense Media
Emory Roane, Privacy Counsel, Privacy Rights Clearinghouse
Jacob Snow, Technology and Civil Liberties Attorney, ACLU

WHAT:
Media Briefing Call

WHO:
EFF, ACLU, Common Sense Media, Consumer Reports, Privacy Rights Clearinghouse

WHEN:
Monday, July 8, 11 am
RSVP to Stephanie Ong, song@commonsense.org, for call-in information.

For more about the bills:
https://www.eff.org/deeplinks/2019/04/california-assemblys-privacy-committee-votes-weaken-landmark-privacy-law

Contact:
Hayley Tsukayama, Legislative Activist, hayleyt@eff.org
Rebecca Farmer, ACLU of Northern California, rfstrategies@gmail.com
Stephanie Ong, Common Sense Media, song@commonsense.org
