Electronic Frontier Foundation

Apple's Plan to "Think Different" About Encryption Opens a Backdoor to Your Private Life

EFF - 4 hours 47 min ago

Apple has announced impending changes to its operating systems that include new “protections for children” features in iCloud and iMessage. If you’ve spent any time following the Crypto Wars, you know what this means: Apple is planning to build a backdoor into its data storage system and its messaging system.

Child exploitation is a serious problem, and Apple isn't the first tech company to bend its privacy-protective stance in an attempt to combat it. But that choice will come at a high price for overall user privacy. Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.

To say that we are disappointed by Apple’s plans is an understatement. Apple has historically been a champion of end-to-end encryption, for all of the same reasons that EFF has articulated time and time again. Apple’s compromise on end-to-end encryption may appease government agencies in the U.S. and abroad, but it is a shocking about-face for users who have relied on the company’s leadership in privacy and security.

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts—that is, accounts designated as owned by a minor—for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.

When Apple releases these “client-side scanning” functionalities, users of iCloud Photos, child users of iMessage, and anyone who talks to a minor through iMessage will have to carefully consider their privacy and security priorities in light of the changes, and may be unable to safely use what until this development has been one of the preeminent encrypted messengers.

Apple Is Opening the Door to Broader Abuses

We’ve said it before, and we’ll say it again now: it’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses.

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change. Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content. New laws in Ethiopia requiring content takedowns of “misinformation” in 24 hours may apply to messaging services. And many other countries—often those with authoritarian governments—have passed similar laws. Apple’s changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.

We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of “terrorist” content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society. While it’s therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content as “terrorism,” including documentation of violence and repression, counterspeech, art, and satire.

Image Scanning on iCloud Photos: A Decrease in Privacy

Apple’s plan for scanning photos that get uploaded into iCloud Photos is similar in some ways to Microsoft’s PhotoDNA. The main product difference is that Apple’s scanning will happen on-device. The (unauditable) database of processed CSAM images will be distributed in the operating system (OS), the processed images transformed so that users cannot see what the image is, and matching done on those transformed images using private set intersection, so that the device does not know whether a match has been found. This means that when the features are rolled out, a version of the NCMEC CSAM database will be uploaded onto every single iPhone. The result of the matching will be sent up to Apple, but Apple can only tell that matches were found once the number of matching photos crosses a preset threshold.

Once a certain number of photos are detected, the photos in question will be sent to human reviewers within Apple, who verify that the photos are in fact part of the CSAM database. If confirmed by the human reviewer, those photos will be sent to NCMEC, and the user’s account disabled. Again, the bottom line here is that whatever privacy and security aspects are in the technical details, all photos uploaded to iCloud will be scanned.
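To make the mechanics concrete, here is a deliberately simplified sketch of threshold-gated database matching. Everything in it - the hash function, the database contents, the threshold value, and all names - is our own illustration; Apple’s actual system uses perceptual “NeuralHash” values and a private set intersection protocol, not plain cryptographic hashes:

```python
import hashlib

# Illustrative only: a device hashes each uploaded photo, compares the
# hash against a database of known-image hashes, and the account is
# flagged only once matches cross a preset threshold.

KNOWN_HASHES = {hashlib.sha256(b"known-image-%d" % i).hexdigest() for i in range(3)}
REPORT_THRESHOLD = 2  # matches required before the account is flagged

def scan_uploads(photos: list) -> bool:
    """Return True if the number of database matches meets the threshold."""
    matches = sum(
        1 for p in photos
        if hashlib.sha256(p).hexdigest() in KNOWN_HASHES
    )
    return matches >= REPORT_THRESHOLD

# One match stays below the threshold; two matches trigger a report.
assert not scan_uploads([b"known-image-0", b"vacation.jpg"])
assert scan_uploads([b"known-image-0", b"known-image-1"])
```

The point the sketch makes is structural: the matching logic and the database ship to the client, so whoever controls the database and the threshold controls what gets reported.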

Make no mistake: this is a decrease in privacy for all iCloud Photos users, not an improvement.

Currently, although Apple holds the keys to view photos stored in iCloud Photos, it does not scan these images. Civil liberties organizations have asked the company to remove its ability to do so. But Apple is choosing the opposite approach and giving itself more knowledge of users’ content.

Machine Learning and Parental Notifications in iMessage: A Shift Away From Strong Encryption

Apple’s second main new feature is two kinds of notifications based on scanning photos sent or received by iMessage. To implement these notifications, Apple will be rolling out an on-device machine learning classifier designed to detect “sexually explicit images.” According to Apple, these features will be limited (at launch) to U.S. users under 18 who have been enrolled in a Family Account. In these new processes, if an account held by a child under 13 wishes to send an image that the on-device machine learning classifier determines is a sexually explicit image, a notification will pop up, telling the under-13 child that their parent will be notified of this content. If the under-13 child still chooses to send the content, they have to accept that the “parent” will be notified, and the image will be irrevocably saved to the parental controls section of their phone for the parent to view later. For users between the ages of 13 and 17, a similar warning notification will pop up, though without the parental notification.

Similarly, if the under-13 child receives an image that iMessage deems to be “sexually explicit”, before being allowed to view the photo, a notification will pop up that tells the under-13 child that their parent will be notified that they are receiving a sexually explicit image. Again, if the under-13 user accepts the image, the parent is notified and the image is saved to the phone. Users between 13 and 17 years old will similarly receive a warning notification, but a notification about this action will not be sent to their parent’s device.
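The age-gated rules described above can be summarized in a few lines of illustrative logic. This is our own restatement of the reported behavior, not Apple’s code, and the function and field names are invented:

```python
# Hypothetical sketch of the age-gated notification rules as described
# in the announcement: under-13 users who proceed trigger a parental
# notification; 13-17 users see only a warning.

def notification_policy(age: int, classifier_says_explicit: bool) -> dict:
    """Return which warnings fire for a sent or received image."""
    if not classifier_says_explicit:
        return {"warn_user": False, "notify_parent": False}
    if age < 13:
        # Under-13: the user is warned, and proceeding notifies the
        # parent and saves the image to parental controls.
        return {"warn_user": True, "notify_parent": True}
    if age < 18:
        # 13-17: warning only; no parental notification.
        return {"warn_user": True, "notify_parent": False}
    return {"warn_user": False, "notify_parent": False}

assert notification_policy(12, True) == {"warn_user": True, "notify_parent": True}
assert notification_policy(15, True) == {"warn_user": True, "notify_parent": False}
```

Note that everything hinges on the classifier’s verdict in the first branch - a single boolean produced by an opaque machine learning model.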

This means that if—for instance—a minor using an iPhone without these features turned on sends a photo to another minor who does have the features enabled, the sender does not receive any notification that iMessage considers their image to be “explicit” or that the recipient’s parent will be notified. The recipient’s parents will be informed of the content without the sender consenting to their involvement. Additionally, once sent or received, the “sexually explicit image” cannot be deleted from the under-13 user’s device.

Whether sending or receiving such content, the under-13 user has the option to decline without the parent being notified. Nevertheless, these notifications give the sense that Apple is watching over the user’s shoulder—and in the case of under-13s, that’s essentially what Apple has given parents the ability to do.

It is also important to note that Apple has chosen to use the notoriously difficult-to-audit technology of machine learning classifiers to determine what constitutes a sexually explicit image. We know from years of documentation and research that machine-learning technologies, used without human oversight, have a habit of wrongfully classifying content, including supposedly “sexually explicit” content. When blogging platform Tumblr instituted a filter for sexual content in 2018, it famously caught all sorts of other imagery in the net, including pictures of Pomeranian puppies, selfies of fully-clothed individuals, and more. Facebook’s attempts to police nudity have resulted in the removal of pictures of famous statues such as Copenhagen’s Little Mermaid. These filters have a history of chilling expression, and there’s plenty of reason to believe that Apple’s will do the same.

Since the detection of a “sexually explicit image” will be using on-device machine learning to scan the contents of messages, Apple will no longer be able to honestly call iMessage “end-to-end encrypted.” Apple and its proponents may argue that scanning before or after a message is encrypted or decrypted keeps the “end-to-end” promise intact, but that would be semantic maneuvering to cover up a tectonic shift in the company’s stance toward strong encryption.

Whatever Apple Calls It, It’s No Longer Secure Messaging

As a reminder, a secure messaging system is a system where no one but the user and their intended recipients can read the messages or otherwise analyze their contents to infer what they are talking about. Despite messages passing through a server, an end-to-end encrypted message will not allow the server to know the contents of a message. When that same server has a channel for revealing information about the contents of a significant portion of messages, that’s not end-to-end encryption. In this case, while Apple will never see the images sent or received by the user, it has still created the classifier that scans images and generates the notifications to the parent. Therefore, it would now be possible for Apple to add new training data to the classifier sent to users’ devices, or to send notifications to a wider audience, easily censoring and chilling speech.
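A toy example makes the distinction concrete. Here we use a throwaway one-time-pad XOR cipher (illustrative only, not a real messaging protocol, and all names are our own): an honest relay sees only ciphertext, but bolting a client-side scanner onto the sender creates a side channel that tells the server something about the plaintext:

```python
import secrets

def encrypt(key: bytes, msg: bytes) -> bytes:
    """Toy one-time-pad: XOR the message with a same-length random key."""
    return bytes(k ^ m for k, m in zip(key, msg))

BLOCKLIST = {b"flagged"}  # stand-in for a scanning database or classifier

def send(msg: bytes, key: bytes, scan_enabled: bool):
    # The scanner runs on the plaintext, before encryption...
    flagged = scan_enabled and msg in BLOCKLIST
    ciphertext = encrypt(key, msg)
    # ...so the relay receives ciphertext PLUS a bit derived from plaintext.
    return ciphertext, flagged

key = secrets.token_bytes(7)
ct, leak = send(b"flagged", key, scan_enabled=True)
assert encrypt(key, ct) == b"flagged"  # the recipient with the key can decrypt
assert leak is True                    # the server learned about the plaintext
```

Even though the ciphertext itself stays opaque, the extra flagged bit is derived from the plaintext - which is exactly why a scanning channel breaks the end-to-end promise.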

But even without such expansions, this system will give parents who do not have the best interests of their children in mind one more way to monitor and control them, limiting the internet’s potential for expanding the world of those whose lives would otherwise be restricted. And because family sharing plans may be organized by abusive partners, it's not a stretch to imagine using this feature as a form of stalkerware.

People have the right to communicate privately without backdoors or censorship, including when those people are minors. Apple should make the right decision: keep these backdoors off of users’ devices.

We Have Questions for DEF CON's Puzzling Keynote Speaker, DHS Secretary Mayorkas

EFF - 6 hours 9 min ago

The Secretary of Homeland Security, Alejandro Mayorkas, will be giving a DEF CON keynote address this year. Those attending this weekend’s hybrid event will have a unique opportunity to “engage” with the man who heads the department responsible for surveillance of immigrants, Muslims, Black activists, and other marginalized communities. We at EFF, as longtime supporters of the information security community who have stood toe-to-toe with government agencies including DHS, have thoughts on areas where Secretary Mayorkas must address digital civil liberties and human rights. So we thought it prudent to suggest some questions you might ask.

If you’re less than optimistic about getting satisfying answers to these from the Secretary, here are some organizations who are actively working to protect the rights of people targeted by the Department of Homeland Security:


Learn more about EFF's virtual participation in DEF CON 29 including EFF Tech Trivia, our annual member shirt puzzle, and a special presentation with author and Special Advisor Cory Doctorow.

16 Civil Society Organizations Call on Congress to Fix the Cryptocurrency Provision of the Infrastructure Bill

EFF - 6 hours 17 min ago

The Electronic Frontier Foundation, Fight for the Future, Defending Rights & Dissent, and 13 other organizations sent a letter to Senators Charles Schumer (D-NY), Mitch McConnell (R-KY), and other members of Congress asking them to act swiftly to amend the vague and dangerous digital currency provision of Biden’s infrastructure bill.

The fast-moving, must-pass legislation is over 2,000 pages and primarily focused on issues such as updating America’s highways and digital infrastructure. However, included in the “pay-for” section of the bill is a provision relevant to cryptocurrencies that includes a new, vague, and expanded definition of what constitutes a “broker” under U.S. tax law. As EFF described earlier this week, this vaguely worded section of the bill could be interpreted to mean that many actors in the cryptocurrency space—including software developers who merely write and publish code, as well as miners who verify cryptocurrency transactions—would suddenly be considered brokers, and thus need to collect and report identifying information on their users.

In the wake of heated opposition from the technical and civil liberties communities, some senators are taking action. Senators Wyden, Lummis, and Toomey have introduced an amendment that seeks to ensure that some of the worst interpretations of this provision are excluded. Namely, the amendment would clarify that miners, software developers who do not hold assets for customers, and those who create hardware and software to support consumers in holding their own cryptocurrency would not be implicated under the new definition of broker.

We have already seen how digital currency supports independent community projects, routes around financial censorship, and supports independent journalists around the world. Indeed, the decentralized nature of digital currency is allowing cryptographers and programmers to experiment with more privacy-protective exchanges, and to offer alternatives for those who wish to protect their financial privacy or those who have been subject to financial censorship.

The privacy rights of cryptocurrency users are a complex topic. Properly addressing such an issue requires ample opportunity for civil liberties experts to offer feedback on proposals. But there has been no opportunity to do that in the rush to fund this unrelated bill. That’s why the coalition that sent the letter—which includes national and local groups representing privacy advocates, journalists, technologists, and cryptocurrency users—shares a common concern that this provision runs roughshod over a nuanced issue.

The Wyden-Lummis-Toomey Amendment removes reporting obligations from network participants who don’t have, and shouldn’t have, access to customer information. It does so without affecting the reporting obligations placed on brokers and traders of digital assets.

Read full letter here: https://www.eff.org/document/civil-society-letter-wyden-lummis-and-toomey-amendment-cryptocurrency-provisions

Utilities Governed Like Empires

EFF - Tue, 08/03/2021 - 5:06pm
Believe the hype

After decades of hype, it’s only natural for your eyes to skate over corporate mission statements without stopping to take note of them. But when it comes to ending your relationship with a tech giant, those stated goals take on a sinister cast.

Whether it’s “bringing the world closer together” (Facebook), “organizing the world’s information” (Google), being a market “where customers can find and discover anything they might want to buy online” (Amazon), or making “personal computing accessible to each and every individual” (Apple), the founding missions of tech giants reveal a desire to become indispensable to our digital lives.

They’ve succeeded. We’ve entrusted these companies with our sensitive data, from family photos to finances to correspondence. We’ve let them take over our communities, from medical and bereavement support groups to little league and service organization forums. We’ve bought trillions of dollars’ worth of media from them, locked in proprietary formats that can’t be played back without their ongoing cooperation.

These services often work great...but they fail very, very badly. Tech giants can run servers that support hundreds of millions or billions of users - but they either can’t or won’t create similarly robust, user-centric procedures for suspending or terminating those users’ accounts.

But as bad as tech giants’ content removal and account termination policies are, they’re paragons of sense and transparency when compared to their appeals processes. Many who try to appeal a tech company’s judgment quickly find themselves mired in a Kafkaesque maze of automated emails (to which you often can’t reply), requests for documents that either don’t exist or have already been furnished on multiple occasions, and high-handed, terse “final judgments” with no explanations or appeal.

The tech giants argue that they are entitled to run their businesses largely as they see fit: if you don’t like the house rules, just take your business elsewhere. But those house rules are hard to pin down: platforms’ public-facing moderation policies are vaguely worded and subject to arbitrary interpretation, and their account termination policies are even more opaque.

Kafka Was An Optimist

All of that would be bad enough, but when it is combined with the tech companies’ desire to dominate your digital life and become indispensable to your daily existence, it gets much worse.

Losing your cloud account can cost you decades of your family photos. Losing access to your media account can cost you access to thousands of dollars’ worth of music, movies, audiobooks and ebooks. Losing your IoT account can render your whole home uninhabitable, freezing the door locks while bricking your thermostat, burglar alarm and security cameras. 

But really, it’s worse than that: you will incur multiple losses if you get kicked off just one service. Losing your account with Amazon, Google or Apple can cost you access to your home automation and security, your mobile devices, your purchased ebooks/audiobooks/movies/music, and your photos. Losing your Apple or Google account can cost you decades’ worth of personal correspondence - from the last email sent by a long-dead friend to that file-attachment from your bookkeeper that you need for your tax audit. These services are designed to act as your backup - your offsite cloud, your central repository - and few people understand or know how to make a local copy of all the data that is so seamlessly whisked from their devices onto big companies’ servers.

In other words, the tech companies set out to make us dependent on them for every aspect of our online lives, and they succeeded - but when it comes to kicking you off their platforms, they still act like you’re just a bar patron at last call, not someone whose life would be shattered if they cut you off.

YouTubers Warned Us

This has been brewing for a long time. YouTubers and other creative laborers have long suffered under a system where the accounts on which they rely to make their livings could be demonetized, suspended or deleted without warning or appeal. But today, we’re all one bad moderation call away from having our lives turned upside-down.

The tech giants’ conquest of our digital lives is just getting started. Tech companies want to manage our health, dispense our medication, take us to the polls on election day, televise our political debates and teach our kids. Each of these product offerings comes with grandiose pretensions to total dominance - it’s not enough for Amazon Pharmacy to be popular, it will be the most popular, leveraging Amazon’s existing business to cut off your corner druggist’s market oxygen (Uber’s IPO included a plan to replace all the world’s public transit and taxi vehicles with rideshares). 

If the tech companies deliver on their promises to their shareholders, then being locked out of your account might mean being locked out of whole swathes of essential services, from buying medicine to getting to work.

Well, How Did We Get Here?

How did the vibrant electronic frontier become a monoculture of “five websites, each consisting of screenshots of text from the other four?” 

It wasn’t an accident. Tech, copyright, contract and competition policy helped engineer this outcome, as did VCs and entrepreneurs who decided that online businesses were only worth backing if they could grow to world-dominating scale.

Take laws like Section 1201 of the Digital Millennium Copyright Act, a broadly worded prohibition on tampering with or removing DRM, even for lawful purposes. When Congress passed the DMCA in 1998, they were warned that protecting DRM - even when no copyright infringement took place - would leave technology users at the mercy of corporations. You may have bought your textbooks or the music you practice piano to, but if it’s got DRM and the company that sold it to you cuts you off, the DMCA does not let you remove that DRM (say goodbye to your media). 

Companies immediately capitalized upon this dangerously broad law: they sold you media that would only play back on the devices they authorized. That locked you into their platform and kept you from defecting to a rival, because you couldn’t take your media with you. 

But even as DRM formats proliferated, the companies that relied on them continued to act like kicking you off their platforms was like the corner store telling you to buy your magazines somewhere else - not like a vast corporate empire of corner stores sending goons to your house to take back every newspaper, magazine and paperback you ever bought there, with no appeal.

It’s easy to see how the DMCA and DRM give big companies far-reaching control over your purchases, but other laws have had a similar effect. The Computer Fraud and Abuse Act (CFAA), another broadly worded mess of a law, is so badly drafted that tech companies were able to claim for decades that simply violating their terms of service could be a crime - a chilling claim that was only put to rest by the Supreme Court this summer.

From the start, tech lawyers and the companies they worked for set things up so that most of the time, our digital activities are bound by contractual arrangements, not ownership. These are usually mass contracts, with one-sided terms of service. They’re end user license agreements that ensure that the company has a simple process for termination without any actual due process, much less strong remedies if you lose your data or the use of your devices.  

CFAA, DMCA, and other rules allowing easy termination and limiting how users and competitors could reconfigure existing technology created a world where doing things that displeased a company’s shareholders could literally be turned into a crime - a kind of “felony contempt of business-model.” 

These kinds of shady business practices wouldn’t have been quite so bad if there were a wide variety of small firms that allowed us to shop around for a better deal. 

Unfortunately, the modern tech industry was born at the same moment as American antitrust law was being dismantled - literally. The Apple ][+ appeared on shelves the same year Ronald Reagan hit the campaign trail. After winning office, Reagan inaugurated a 40-year, bipartisan project to neuter antitrust law, allowing incumbents to buy and crush small companies before they could grow to be threats; letting giant companies merge with their direct competitors, and looking the other way while companies established “vertical monopolies” that controlled their whole supply chains.

Without any brakes, the runaway merger train went barrelling along, picking up speed. Today’s tech giants buy companies more often than you buy groceries, and it has turned the tech industry into a “kill-zone” where innovative ideas go to die.

How is it that you can wake up one day and discover you’ve lost your Amazon account, and get no explanation? How is it that this can cost you the server you run your small business on, a decade of family photos, the use of your ebook reader and mobile phone, and access to your entire library of ebooks, movies and audiobooks?


Amazon is in so many parts of your life because it was allowed to merge with small competitors, create vertical monopolies, wrap its media with DRM - and never take on any obligations to be fair or decent to customers it suspected of some unspecified wrongdoing. 

Not just Amazon, either - every tech giant has an arc that looks like Amazon’s, from the concerted effort to make you dependent on its products, to the indifferent, opaque system of corporate “justice” governing account termination and content removal.

Fix the Tech Companies

Companies should be better. Moderation decisions should be transparent, rules-based, and follow basic due process principles. All of this - and more - has been articulated in detail by an international group of experts from industry, the academy, and human rights activism, in an extraordinary document called The Santa Clara Principles. Tech companies should follow these rules when moderating content, because even if they are free to set their own house rules, the public has the right to tell them when those rules suck and to suggest better ones.

If a company does kick you off its platform - or if you decide to leave - they shouldn’t be allowed to hang onto your data (or just delete it). It’s your data, not theirs. The concept of a “fiduciary” - someone with a duty to “act in good faith” towards you - is well-established. If you fire your lawyer (or if they fire you as a client), they have to give you your files. Ditto your doctor or your mental health professional. 

Many legal scholars have proposed creating “information fiduciary” rules that create similar duties for firms that hold your data. This would impose a “duty of loyalty” (to act in the best interests of their customers, without regard to the interests of the business), and a “duty of care” (to act in the manner expected by a reasonable customer under the circumstances). 

Not only would this go a long way to resolving the privacy abuses that plague our online interactions - it would also guarantee you the right to take your data with you when you left a service, whether that departure was your idea or not. 

Information fiduciary rules aren’t the only way to make companies responsible. Direct consumer protection laws - such as requiring companies to make your content readily available to you in the event of termination - could work too. How these rules would apply would depend on the content a company hosts as well as the size of the business you’re dealing with - small companies would struggle to meet the standards we’d expect of giant companies. But every online service should have some duties to you - if the company that just kicked you off its servers and took your wedding photos hostage is a two-person operation, you still want your pictures back!

Fix the Internet

Improving corporate behavior is always a laudable goal, but the real problem with giant companies that are entwined in your life in ways you can’t avoid isn’t that those companies wield their incredible power unwisely. It’s that they have that power in the first place.

To give power to internet users, we have to take it away from giant internet companies. The FTC - under new leadership - has pledged that it will end decades of waving through anticompetitive mergers. That’s just for openers, though. Competition scholars and activists have made the case for the harder task of breaking up the giants, literally cutting them down to size.

But there’s more.  Congress is considering the ACCESS Act, landmark legislation that would force the largest companies to interoperate with privacy-respecting new rivals, who’d be banned from exploiting user data. If the ACCESS Act passes, it will dramatically lower the high switching costs that keep us locked into big platforms even though we don’t like the way they operate. It also protects folks who want to develop tools to make it easier for you to take your data when you leave, whether voluntarily or because your account is terminated. 

That’s how we’ll turn the internet back into an ecosystem of companies, co-ops and nonprofits of every size that can take receipt of your data, and offer you an online base of operations from which you can communicate with friends, communities and customers regardless of whether they’re on the indieweb or inside a Big Tech silo.

That still won’t be enough, though. The fact that terms of service, DRM, and other technologies and laws can prevent third parties from supplying software for your phone, playing back the media you’ve bought, and running the games you own still gives big companies too much leverage over your digital life.

That’s why we need to restore the right to interoperate, in all its guises: competitive compatibility (the right to plug new products and services into existing ones, with or without permission from their manufacturers), bypassing DRM (we’re suing to make this happen!), the right to repair (a fight we’re winning!) and an end to abusive terms of service (the Supreme Court got this one right).

Digital Rights are Human Rights

When we joined this fight, 30 long years ago, very few people got it. Our critics jeered at the very idea of “digital rights” - as if the nerdfights over Star Trek forums could somehow be compared to history’s great struggles for self-determination and justice! Even a decade ago, the idea of digital rights was greeted with jeers and skepticism.

But we didn’t get into this to fight for “digital rights” - we’re here to defend human rights. The merger of the “real world” and the “virtual world” could be argued over in the 1990s, but not today, not after a lockdown where the internet became the nervous system for the planet, a single wire we depended on for free speech, a free press, freedom of assembly, romance, family, parenting, faith, education, employment, civics and politics.

Today, everything we do involves the internet. Tomorrow, everything will require it. We can’t afford to let our digital citizenship be reduced to a heavy-handed mess of unreadable terms of service and broken appeals processes.

We have the right to a better digital future - a future where the ambitions of would-be monopolists and their shareholders take a back-seat to fairness, equity, and your right to self-determination.

Flex Your Power. Own Your Tech.

EFF - Tue, 08/03/2021 - 12:26pm

Before advanced computer graphics, a collection of clumsy pixels would represent ideas far more complex than technology could capture on its own. With a little imagination, crude blocks on a screen could transform into steel titans and unknown worlds. It’s that spirit of creativity and vision that we celebrate each year at the Las Vegas hacker conferences—BSidesLV, Black Hat, and DEF CON—and beyond.

The Electronic Frontier Foundation has advised tinkerers and security researchers at conferences like these for decades because human ingenuity is faster and hungrier than the contours of the law. Copyright, patent, and hacking statutes often conflict with legitimate activities for ordinary folks, driving EFF to help fill the all-too-common gap in people's understanding of technology and the law. Thankfully, support from the public has allowed EFF to continue leading efforts to even the playing field for everyone. It brings us all closer to EFF's ambitious, and increasingly urgent, view of the future: one where creators keep civil liberties and human rights at the center of technology. You can help us build that future as an EFF member.

Stand with EFF

Join EFF and Protect Online Freedom

In honor of this week's hacker conferences, EFF’s annual mystery-filled DEF CON t-shirt is available to everyone, but it won’t last long! Our DC29 Pixel Mech design is a reminder that simple ideas can have colossal potential. Like previous years' designs, there's more than meets the eye.

EFF members' commitment to tech users and creators is more necessary each day. Together we've been able to develop privacy-enhancing tools like Certbot to encrypt more of the web; work with policymakers to support fiber broadband infrastructure; beat back dangerous and invasive public-private surveillance partnerships; propose user-focused solutions to big tech strangleholds on your free expression and consumer choice; and rein in oppressive tech laws like the CFAA, which we just fought, and won, in the U.S. Supreme Court.

Just as a good hacker sees worlds of possibility in plastic, metal, and pixels, we must all envision and work for a future that’s better than what we’re given. It doesn't matter whether you're an OG cyberpunk phreaker or you just enjoy checking out the latest viral dance moves: we all benefit from a web that empowers users and supports curiosity and creativity online. Support EFF's vital work this year!

Viva Las Vegas, wherever you are.


EFF is a U.S. 501(c)(3) nonprofit with a top rating from Charity Navigator. Your gift is tax-deductible as allowed by law. You can even support EFF all year with a convenient monthly donation!

The Cryptocurrency Surveillance Provision Buried in the Infrastructure Bill is a Disaster for Digital Privacy

EFF - Mon, 08/02/2021 - 5:35pm

The forthcoming Senate draft of Biden's infrastructure bill—a 2,000+ page bill designed to update the United States’ roads, highways, and digital infrastructure—contains a poorly crafted provision that could create new surveillance requirements for many within the blockchain ecosystem. This could include developers and others who do not control digital assets on behalf of users.

While the language is still evolving, the proposal would seek to expand the definition of “broker” under section 6045(c)(1) of the Internal Revenue Code of 1986 to include anyone who is “responsible for and regularly providing any service effectuating transfers of digital assets” on behalf of another person. These newly defined brokers would be required to comply with IRS reporting requirements for brokers, including filing form 1099s with the IRS. That means they would have to collect user data, including users’ names and addresses.

The broad, confusing language leaves open a door for almost any entity within the cryptocurrency ecosystem to be considered a “broker”—including software developers and cryptocurrency startups that aren’t custodying or controlling assets on behalf of their users. It could even potentially implicate miners, those who confirm and verify blockchain transactions. The mandate to collect names, addresses, and transactions of customers means almost every company even tangentially related to cryptocurrency may suddenly be forced to surveil their users. 

How this would work in practice is still very much an open question. Indeed, perhaps this extremely broad interpretation was not even the intent of the drafters of this language. But given the rapid timeline for the bill’s likely passage, those answers may not be resolved before it hits the Senate floor for a vote.

Some may wonder why an infrastructure bill primarily focused on topics like highways is even attempting to address as complex and evolving a topic as digital privacy and cryptocurrency. This provision is actually buried in the section of the bill relevant to covering the costs of the other proposals. In general, bills that seek to offer new government services must explain how the government will pay for those services. This can be done through increasing taxes or by somehow improving tax compliance. The cryptocurrency provision in this bill is attempting to do the latter. The argument is that by engaging in more rigorous surveillance of the cryptocurrency community, the Biden administration will see more tax revenue flow in from this community without actually increasing taxes, and thus be able to cover $28 billion of its $2 trillion infrastructure plan. Basically, it’s presuming that huge swaths of cryptocurrency users are engaged in mass tax avoidance, without providing any evidence of that.

Make no mistake: there is a clear and substantial harm in ratcheting up financial surveillance and forcing more actors within the blockchain ecosystem to gather data on users. Including this provision in the infrastructure bill will: 

  • Require new surveillance of everyday users of cryptocurrency;
  • Force software creators and others who do not custody cryptocurrency for their users to implement cumbersome surveillance systems or stop offering services in the United States;
  • Create more honeypots of private information about cryptocurrency users that could attract malicious actors; and
  • Create more legal complexity to developing blockchain projects or verifying transactions in the United States—likely leading to more innovation moving overseas.

Furthermore, it is impossible for miners and developers to comply with these reporting requirements; these parties have no way to gather that type of information. 

The bill could also create uncertainty about the ability to conduct cryptocurrency transactions directly with others, via open source code (e.g. smart contracts and decentralized exchanges), while remaining anonymous. The ability to transact directly with others anonymously is fundamental to civil liberties, as financial records provide an intimate window into a person's life.

This poor drafting appears to be yet another example of lawmakers failing to understand the underlying technology used by cryptocurrencies. EFF has long advocated for Congress to protect consumers by focusing on malicious actors engaged in fraudulent practices within the cryptocurrency space. However, overbroad and technologically disconnected cryptocurrency regulation could do more harm than good. Blockchain projects should serve the interests and needs of users, and we hope to see a diverse and competitive ecosystem where values such as individual privacy, censorship-resistance, and interoperability are designed into blockchain projects from the ground up. Smart cryptocurrency regulation will foster this innovation and uphold consumer privacy, not surveil users while failing to do anything meaningful to combat fraud.

EFF has a few key concepts we’ve urged Congress to adopt when developing cryptocurrency regulation, specifically that any regulation:

  • Should be technologically neutral;
  • Should not apply to those who merely write and publish code;
  • Should provide protections for individual miners, merchants who accept cryptocurrencies, and individuals who trade in cryptocurrency as consumers;
  • Should focus on custodial services that hold and trade assets on behalf of users;
  • Should provide an adequate on-ramp for new services to comply;
  • Should recognize the human right to privacy;
  • Should recognize the important role of decentralized technologies in empowering consumers;
  • Should not chill future innovation that will benefit consumers.

The poorly drafted provision in Biden’s infrastructure bill fails our criteria across the board.

The Senate should act swiftly to modify or remove this dangerous provision. Getting cryptocurrency regulation right means ensuring an opportunity for public engagement and nuance—and the breakneck timeline of the infrastructure bill leaves no chance for either.

DHS’s Flawed Plan for Mobile Driver’s Licenses

EFF - Thu, 07/29/2021 - 6:39pm

Digital identification can invade our privacy and aggravate existing social inequities. Designed wrong, it might be a big step towards national identification, in which every time we walk through a door or buy coffee, a record of the event is collected and aggregated. Also, any system that privileges digital identification over traditional forms will disadvantage people already at society’s margins.

So, we’re troubled by proposed rules on “mobile driver’s licenses” (or “mDLs”) from the U.S. Department of Homeland Security. And we’ve joined with the ACLU and EPIC to file comments that raise privacy and equity concerns about these rules. The stakes are high, as the comments explain:

By making it more convenient to show ID and thus easier to ask for it, digital IDs would inevitably make demands for ID more frequent in American life. They may also lead to the routine use of automated or “robot” ID checks carried out not by humans but by machines, causing such demands to proliferate even more. Depending on how a digital ID is designed, it could also allow centralized tracking of all ID checks, and raise other privacy issues. And we would be likely to see demands for driver’s license checks become widespread online, which would enormously expand the tracking information such ID checks could create. In the worst case, this would make it nearly impossible to engage in online activities that aren’t tied to our verified, real-world identities, thus hampering the ability to engage in constitutionally protected anonymous speech and facilitating privacy-destroying persistent tracking of our activities and associations.

Longer-term, if digital IDs replace physical documents entirely, or if physical-only document holders are placed at a disadvantage, that could have significant implications for equity and fairness in American life. Many people do not have smartphones, including many from our most vulnerable communities. Studies have found that 15 percent of the population does not own a smartphone, including almost 40 percent of people over 65 and 24 percent of people who make less than $30,000 a year.

Finally, we are concerned that the DHS proposal layers REAL ID with mDL. REAL ID has many privacy problems, which should not be carried over into mDLs. Moreover, if a person had an mDL issued by a state DMV, that would address forgery and cloning concerns, without the need for REAL ID and its privacy problems.

Texas AG Paxton's Retaliatory Investigation of Twitter Goes to Ninth Circuit

EFF - Thu, 07/29/2021 - 5:21pm

Governments around the world are pressuring websites and social media platforms to publish or censor particular speech with investigations, subpoenas, raids, and bans. Burdensome investigations are enough to coerce targets of this retaliation into following a government’s editorial line. In the US, longstanding First Amendment law recognizes and protects people from this chilling effect. In an amicus brief filed July 23, EFF, along with the Center for Democracy and Technology and other partner organizations, urged the Ninth Circuit Court of Appeals to apply this law and protect Twitter from a retaliatory investigation by Texas Attorney General Ken Paxton.

After Twitter banned then-President Trump following the January 6 riots at the U.S. Capitol, Paxton issued a Civil Investigative Demand (CID) to Twitter (and other major online platforms) for, among other things, any documents relating to its terms of use and content moderation practices. The CID alleged "possible violations" of Texas's deceptive practices law.

The district court allowed Paxton's investigation to proceed because, even if it was retaliatory, Paxton would have to go to court to enforce the CID: Twitter had to let the investigation play out before it could sue. But as our brief says, courts have recognized that “even pre-enforcement, threatened punishment of speech has a chilling effect.” You don't have to wait to go to court when your free expression is inhibited.

Access to online platforms with different rules and environments generally benefits users, though the brief also points out that EFF and partner organizations have criticized platforms for removing benign posts, censoring human rights activists and journalists, and other bad content moderation practices. We have to address those mistakes, but not through chilling government investigations.

The Bipartisan Broadband Bill: Good, But It Won’t End the Digital Divide

EFF - Thu, 07/29/2021 - 3:53pm

The U.S. Senate is on the cusp of approving an infrastructure package, which passed a critical first vote last night by 67-32. Negotiations on the final bill are ongoing, but late yesterday NBC News had the draft broadband provisions. There is a lot to like in it, some of which will depend on decisions by the state governments and the Federal Communications Commission (FCC), and some drawbacks. Assuming that what was released makes it into the final bill, here is what to expect.

Not Enough Money to Close the Digital Divide Across the U.S.

We have long advocated, backed by evidence, for a plan that would connect every American to fiber. It is a vital part of any nationwide communications policy that intends to actually function in the 21st century. The future is clearly heading towards more symmetrical uses that will require more bandwidth at very low latency. Falling short of that will inevitably create a new digital divide, this one between those with 21st-century access and those without. Fiber-connected people will head towards the cheaper, symmetrical multi-gigabit era while others are stuck on capacity-constrained, expensive legacy wires. This “speed chasm” will create a divide between those who can participate in an increasingly remote, telecommuting world and those who cannot.

Most estimates put the price tag of universal fiber at $80 to $100 billion, but this bipartisan package proposes only $40 billion in total for construction. It’s pretty obvious that this shortfall will deprive many areas of the funding they need to deliver fiber, or really any broadband access, to the millions of Americans in need of access.

Congress can rectify this shortfall in the future with additional infusions of funding, as well as a stronger emphasis on treating fiber as infrastructure rather than purely a broadband service. But it should be clear what it means not to do so now. Some states will do very well under this proposal, with the federal efforts complementing already existing state efforts. For example, California already has a state universal fiber effort underway that recruits all local actors to work with the state to deliver fiber infrastructure. More federal dollars will just augment an already very good thing there. But other states may, unfortunately, get duped into building out or subsidizing slow networks that will inevitably need to be replaced, which will cost the state and federal governments more money in the end. This isn’t fated to happen, but it’s a risk invited by the legislation’s adoption of 100/20 Mbps as the build-out metric instead of 100/100 Mbps.

Protecting the Cable Monopolies Instead of Giving Us What We Need

Lobbyists for the slow legacy internet access companies descended on Capitol Hill with a range of arguments trying to dissuade Congress from creating competition in neglected markets, which in turn would force existing carriers to provide better service. Everyone will eventually need access to fiber-optic infrastructure. Our technical analysis has made clear that fiber is the superior medium for 21st-century broadband, which is why government infrastructure policy needs to be oriented around pushing fiber into every community.

Even major wireless industry players now agree that fiber is “inextricably linked” with future high-speed wireless connectivity. But all of this was very inconvenient for existing legacy monopolies. Most notably, cable stood to lose if too many people got very fast, cheaper internet from someone else. The legislation includes provisions that effectively insulate the underinvested cable monopoly markets from federal dollars. That, arguably, is the worst outcome here.

By defining internet access as the ability to get 100/20 Mbps service, the draft language allows cable monopolies to argue that anyone with access to ancient, insufficient internet service does not need federal money to build new infrastructure. That means communities stuck on nearly decade-old DOCSIS 3.0 broadband are shielded from federal dollars being used to build fiber. Copper-DSL-only areas, and areas entirely without broadband, will likely take the lion’s share of the $40 billion made available. In addition to rural areas, pockets of urban markets where people still lack broadband will qualify. This will lead to an absurd result: people on inferior, too-expensive cable services will be treated as equally served as their neighbors who get federally funded fiber.

The Future-Proofing Criteria Is Essential to Help Avoid Wasting These Investments

The proposal establishes a priority (not a mandate) for future-proof infrastructure, which is essential to keep the 100/20 Mbps speed, or something close to it, from becoming the standard. Legacy industry was fond of telling Congress to be “technology neutral” in its policy, when really it was asking Congress to create a program that subsidized obsolete connections by lowering the bar. The future-proofing provision helps avoid that outcome by establishing federal priorities for the broadband projects being funded (see below).

This is where things will be challenging in the years to come. The Biden Administration has been crystal clear about the link between fiber infrastructure and future-proofing, per the Treasury guidelines that implemented the broadband provisions of the American Rescue Plan. But the bipartisan bill gives a lot of discretion to the states to distribute the funds. Without a doubt, the same lobby that descended on Congress to argue against 100/100 Mbps will attempt to con state governments into believing any infrastructure will deliver these goals. That is just not true as a matter of physics. States that understand this will push fiber, and they are given the flexibility to do so here.

Digital Discrimination Rules

Under the section titled “digital discrimination,” the bill requires the FCC to establish what it means to have equal access to broadband and, more importantly, what a carrier would have to do to violate such a requirement. This provision carries major possibilities but depends on whom the president nominates to run the FCC, as that person will be responsible for setting the rules. If done right, it can set the stage for addressing digital redlining in certain urban communities and push fiber on equitable terms.

If the FCC gets the regulation right, the most direct beneficiaries are likely to be city broadband users who have been left behind, even in big cities with profitable markets. For example, per San Francisco’s own internal analysis, approximately 100,000 of the city’s residents lack broadband (most of whom are low-income and predominantly people of color), yet they are surrounded by Comcast and AT&T fiber deployments in that same city. The same is true in various other major cities, per numerous studies, which is why EFF has called for a ban on digital redlining at both the state and federal levels.

It’s Time for Police to Stop Using ShotSpotter

EFF - Thu, 07/29/2021 - 1:21pm

Court documents recently reviewed by VICE have revealed that ShotSpotter, a company that makes and sells audio gunshot detection to cities and police departments, may not be as accurate or reliable as the company claims. In fact, the documents reveal that employees at ShotSpotter may be altering alerts generated by the technology in order to justify arrests and buttress prosecutors’ cases. For many reasons, including the concerns raised by these recent reports, police must stop using technologies like ShotSpotter.

Acoustic gunshot detection relies on a series of sensors, often placed on lamp posts or buildings. If a gunshot is fired, the sensors detect the specific acoustic signature of a gunshot and send the time and location to the police. Location is gauged by measuring the amount of time it takes for the sound to reach sensors in different locations.
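The time-difference-of-arrival idea behind this can be sketched in a few lines of Python. Everything concrete below is an illustrative assumption, not a detail of ShotSpotter's proprietary system: the sensor layout, the shot location, and a nominal 343 m/s speed of sound. The sketch simply searches for the point whose predicted arrival-time differences best match the observed ones.

```python
import math

SPEED_OF_SOUND = 343.0  # meters/second in air at roughly 20°C (assumed)

def arrival_time(source, sensor):
    """Seconds for a sound emitted at `source` to reach `sensor`."""
    return math.dist(source, sensor) / SPEED_OF_SOUND

def locate(sensors, times, span=300, step=2):
    """Brute-force grid search for the point whose predicted time
    differences of arrival (relative to the first sensor) best match
    the observed ones."""
    observed = [t - times[0] for t in times[1:]]
    best, best_err = None, float("inf")
    for x in range(-span, span + 1, step):
        for y in range(-span, span + 1, step):
            d0 = math.dist((x, y), sensors[0])
            err = sum(
                ((math.dist((x, y), s) - d0) / SPEED_OF_SOUND - obs) ** 2
                for s, obs in zip(sensors[1:], observed)
            )
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Hypothetical sensor layout (coordinates in meters) and shot location.
sensors = [(0, 0), (400, 0), (0, 400), (400, 400)]
shot = (120, 250)
times = [arrival_time(shot, s) for s in sensors]
print(locate(sensors, times))  # → (120, 250)
```

A real deployment would use a proper least-squares solver and must contend with echoes, noise, and sensor clock error, which is exactly why the classification and localization steps can go wrong in practice.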

According to ShotSpotter, the largest vendor of acoustic gunshot detection technology, this information is then verified by human acoustic experts to confirm the sound is gunfire, and not a car backfire, firecracker, or other sounds that could be mistaken for gunshots. The sensors themselves can only determine whether there is a loud noise that somewhat resembles a gunshot. It’s still up to people listening on headphones to say whether or not shots were fired.

In a recent statement, ShotSpotter denied the VICE report and claimed that the technology is “100% reliable.” Absolute claims like these are always dubious. And according to the testimony of a ShotSpotter employee and expert witness in court documents reviewed by VICE, claims about the accuracy of the classification come from the marketing department of the company—not from engineers.

Moreover, ShotSpotter presents a real and disturbing threat to people who live in cities covered in these AI-augmented listening devices—which all too often are over-deployed in majority Black and Latine neighborhoods. A recent study of Chicago showed how, over the span of 21 months, ShotSpotter sent police to dead-end reports of shots fired over 40,000 times. This shows—again—that the technology is not as accurate as the company’s marketing department claims. It also means that police officers routinely are deployed to neighborhoods expecting to encounter an armed shooter, and instead encounter innocent pedestrians and neighborhood residents. This creates a real risk that police officers will interpret anyone they encounter near the projected site of the loud noises as a threat—a scenario that could easily result in civilian casualties, especially in over-policed communities.

In addition to its history of false positives, the danger it poses to pedestrians and residents, and the company's dubious record of altering data at the behest of police departments, there is also a civil liberties concern posed by the fact that these microphones intended to detect gunshots can also record human voices.

Yet people in public places—for example, having a quiet conversation on a deserted street—are often entitled to a reasonable expectation of privacy, without overhead microphones unexpectedly recording their conversations. Federal and state eavesdropping statutes (sometimes called wiretapping or interception laws) typically prohibit the recording of private conversations absent consent from at least one person in that conversation.

In at least two criminal trials, prosecutors sought to introduce as evidence audio of voices recorded on acoustic gunshot detection systems. In the California case People v. Johnson, the court admitted it into evidence. In the Massachusetts case Commonwealth v. Denison, the court did not, ruling that a recording of “oral communication” is prohibited “interception” under the Massachusetts Wiretap Act.

It’s only a matter of time before police and prosecutors’ reliance on ShotSpotter leads to tragic consequences. It’s time for cities to stop using ShotSpotter.

Disentangling Disinformation: Not as Easy as it Looks

EFF - Wed, 07/28/2021 - 6:30pm

Body bags claiming that “disinformation kills” line the streets today in front of Facebook’s Washington, D.C. headquarters. A group of protesters, affiliated with “The Real Facebook Oversight Board” (an organization that is, confusingly, not affiliated with Facebook or its Oversight Board), is urging Facebook’s shareholders to ban so-called misinformation “superspreaders”—that is, a specific number of accounts that have been deemed responsible for the majority of disinformation about the COVID-19 vaccines.

Disinformation about the vaccines is certainly contributing to their slow uptake in various parts of the U.S. as well as other countries. This disinformation spreads in a variety of ways: local communities, family WhatsApp groups, FOX television hosts, and yes, Facebook. The activists pushing for Facebook to remove these “superspreaders” are not wrong: Facebook already bans some COVID-19 mis- and disinformation, and urging the company to enforce its own rules more evenly is a tried-and-true tactic.

But while disinformation “superspreaders” are easy to identify based on the sheer amount of information they disseminate, tackling disinformation at a systemic level is not an easy task, and some of the policy proposals we’re seeing have us concerned. Here’s why.

1. Disinformation is not always simple to identify.

In the United States, it was only a few decades ago that the medical community deemed homosexuality a mental illness. It took serious activism and societal debate for the medical community to come to an understanding that it was not. Had Facebook been around—and had we allowed it to be the arbiter of truth—that debate might not have flourished.

Here’s a more recent example: There is much debate amongst the contemporary medical community as to the causes of ME/CFS, a chronic illness for which a definitive cause has not been determined—and which, just a few years ago, was thought by many not to be real. The Centers for Disease Control notes this and acknowledges that some healthcare providers may not take the illness seriously. Many sufferers of ME/CFS use platforms like Facebook and Twitter to discuss their illness and find community. If those platforms were to crack down on that discussion, relying on the views of the providers that deny the gravity of the illness, those who suffer from it would suffer more greatly.

2. Tasking an authority with determining disinfo has serious downsides.

As we’ve seen from the first example, there isn’t always agreement between authorities and society as to what is truthful—nor are authorities inherently correct.

In January, German newspaper Handelsblatt published a report stating that the Oxford-AstraZeneca vaccine was not efficacious for older adults, citing an anonymous government source and claiming that the German government’s vaccination scheme was risky.

AstraZeneca denied the claims, and no evidence that the vaccine was ineffective for older adults ever materialized, but it didn’t matter: Handelsblatt’s reporting set off a series of events that considerably damaged AstraZeneca’s reputation in Germany.

Finally, it’s worth pointing out that even the CDC itself—the authority tasked with providing information about COVID-19—has gotten a few things wrong, most recently in May when it lifted its recommendation that people wear masks indoors, an event that was followed by a surge in COVID-19 cases. That shift was met with rigorous debate on social media, including from epidemiologists and sociologists—debate that was important for many individuals seeking to understand what was best for their health. Had Facebook relied on the CDC to guide its misinformation policy, that debate may well have been stifled.

3. Enforcing rules around disinformation is not an easy task.

We know that enforcing terms of service and community standards is a difficult task even for the most resourced, even for those with the best of intentions—like, say, a well-respected, well-funded German newspaper. But if a newspaper, with layers of editors, doesn’t always get it right, how can content moderators—who by all accounts are low-wage workers who must moderate a certain amount of content per hour—be expected to do so? And more to the point, how can we expect automated technologies—which already make a staggering amount of errors in moderation—to get it right?

The fact is, moderation is hard at any level and impossible at scale. Certainly, companies could do better when it comes to repeat offenders like the disinformation “superspreaders,” but the majority of content, spread across hundreds of languages and jurisdictions, will be much more difficult to moderate—and as with nearly every category of expression, plenty of good content will get caught in the net.

Should Congress Close the FBI’s Backdoor for Spying on American Communications? Yes.

EFF - Wed, 07/28/2021 - 3:25pm

All of us deserve the basic protection against government searches and seizures that the Constitution provides, including the requirement that law enforcement get a warrant before it can access our communications. But currently the FBI has a backdoor into our communications, a loophole that Congress can and should close.

This week, Congress will vote on the Commerce, Justice, Science and Related Agencies Appropriations bill (H.R. 4505). Among many other things, this bill contains all the funding for the Department of Justice for Fiscal Year 2022 along with certain restrictions on how the DOJ is allowed to spend taxpayer funds. Reps. Lofgren, Massie, Jayapal, and Davidson have offered an amendment to the bill that would prohibit the use of taxpayer funds to conduct warrantless wiretapping of US Persons conducted under Section 702 of the FISA Amendments Act. We strongly support this Amendment.

Section 702 of the Foreign Intelligence Surveillance Act (FISA) requires tech and telecommunications companies to provide the U.S. government with access to emails and other communications to aid in national security investigations, ostensibly when U.S. persons are in communication with foreign surveillance targets abroad or when wholly foreign communications transit the U.S. But in this wide-sweeping dragnet approach to intelligence collection, companies allow government access to and collection of a large amount of “incidental” communications: millions of untargeted communications of U.S. persons that are swept up with the intended data. Once those communications are collected, the FBI can bypass the Fourth Amendment’s warrant requirement and sift through them, effectively using Section 702 as a “backdoor” around the Constitution. The FISA Court has told the FBI that this violates Americans’ Fourth Amendment rights, but that has not seemed to stop the bureau, and, frustratingly, the court has failed to take steps to ensure that it stops.

This amendment would not only forbid the DOJ from engaging in this activity, it would also send a powerful signal to the intelligence agencies that Congress is serious about reform.

Take action

Tell your member of Congress to support this amendment today.

The DOJ is opposing this amendment, saying that it would inhibit its investigations and make it less successful at rooting out kidnappings and child trafficking. We’ve heard this argument before, and it’s just not convincing.

The FBI has a wide range of investigatory tools. It gives a scary list of potential investigations that it says might be impacted by removing its backdoor, but for every single one of them, the FBI can get a warrant or use other investigatory tools like National Security Letters. What the DOJ elides in protesting this narrow amendment is that the FBI has gotten used to searching through already-collected communications of Americans, overbroadly collected for foreign intelligence purposes, for domestic law enforcement purposes. But it is not the purpose of Section 702 to save the FBI the trouble of getting a warrant (FISA or otherwise) for domestic investigations, as the law and the Constitution require, before it collects needed information from telecommunications and internet service providers. The FBI is in no way prohibited from using its long-standing, powerful investigatory tools by this amendment; it just can no longer piggyback on admittedly overbroad foreign intelligence collections.

The government also elides that what it wants is to take advantage of Section 702’s massive, well-documented over-collection as a kind of time machine. There is a possibility that information collected by the NSA will be deleted before the FBI can get a warrant, but the FBI has not submitted any public (or, as far as we can tell, classified) evidence that this is a major problem in practice or has resulted in thwarted prosecutions--as opposed to just requiring a bit more effort by the FBI. But protecting Americans’ privacy is worth making the FBI follow the Constitution, even if it takes a bit more effort.

The U.S. Supreme Court has long denied domestic law enforcement the equivalent of a general warrant--first collecting a broad swath of Americans’ communications, then sorting through it later for whatever it may need. That is what the FBI is defending here, it is what the FISC raised concerns about, and it is what this amendment will rightfully stop.

Take action

Tell your member of Congress to support this amendment today.

EFF at 30: Freeing the Internet, with Net Neutrality Pioneer Gigi Sohn

EFF - Wed, 07/28/2021 - 1:46pm

To commemorate the Electronic Frontier Foundation’s 30th anniversary, we present EFF30 Fireside Chats. This limited series of livestreamed conversations looks back at some of the biggest issues in internet history and their effects on the modern web.

To celebrate 30 years of defending online freedom, EFF held a candid live discussion with net neutrality pioneer and EFF board member Gigi Sohn, who served as Counselor to the Chairman of the Federal Communications Commission and co-founded the leading advocacy organization Public Knowledge. Joining the chat were EFF Senior Legislative Counsel Ernesto Falcon and Associate Director of Policy and Activism Katharine Trendacosta. You can watch the full conversation here.

In my perfect world, everyone’s connected to a future proof, fast, affordable—and open—internet.

On July 28, we’ll be holding our final EFF30 Fireside Chat—a "Founders Edition." EFF's Executive Director, Cindy Cohn will be joined by some of our founders and early board members, Esther Dyson, Mitch Kapor, and John Gilmore, to discuss everything from EFF's origin story and its role in digital rights to where we are today.

EFF30 Fireside Chat: Founders Edition
Wednesday, July 28, 2021 at 5 pm Pacific Time
Streaming Discussion with Q&A

RSVP to the Next EFF30 Fireside Chat


The conversation began with a comparison between the policy battles of the 1990s, the 2000s, and today: “What was happening was that the copyright industry--Hollywood, the recording industry, the book publishers--saw this technology that gave people power to control their own experience and what they wanted to see, and what they wanted to listen to, and it flipped them out...we really need[ed] an organization in Washington that’s dedicated to a free and open internet, that’s free of copyright gatekeepers, and free of ISP gatekeepers.” This was the founding of Public Knowledge, an organization that fights, alongside EFF, to protect the open internet.

[Embedded video: https://www.youtube.com/embed/ASCg1x5al_o (privacy info: this embed serves content from youtube.com)]

Many think of net neutrality--the idea that Internet service providers (ISPs) should treat all data that travels over their networks fairly, without improper discrimination in favor of particular apps, sites, or services--as a fairly recent issue. But it actually dates to the late 1990s, Sohn explained. The battle, in many ways, began in earnest in Portland, when the city’s consumer protection agency told AT&T that its cable modem service was going to be regulated under Title VI of the Communications Act of 1934. This led to a court case in which the Ninth Circuit determined that cable modem service was actually a telecommunications service, fell under Title II of the Communications Act, and should be regulated similarly to telephone service. Watch the full clip below for a deep dive into net neutrality’s history:

[Embedded video: https://www.youtube.com/embed/9I8CJbn6KVQ (privacy info: this embed serves content from youtube.com)]

Moving to the topic of broadband access, Katharine Trendacosta described it along the same lines as net neutrality: “It’s not a partisan issue. Most Americans support net neutrality. Most Americans need internet access.” And the increased need for access during the pandemic wasn’t a blip: “This is always what the future was going to look like.”

[Embedded video: https://www.youtube.com/embed/Aeph3-kemwo (privacy info: this embed serves content from youtube.com)]

But crises like the pandemic do show the dangerous cracks that exist due to the current lack of broadband regulation. For example, Sohn explained, the Santa Clara fire department was throttled during the Mendocino Complex fire, and had nowhere to go to fix the problem. And over the last year, “the former FCC chairman had to beg the companies not to cut off people’s service during the pandemic. The FCC couldn’t say ‘you must,’ they had to say ‘Mother, may I?’” To put it bluntly, said Ernesto Falcon, as access is more critical than ever, the lack of authority leaves many people without recourse: “Three-quarters of Americans now think of broadband as the same as electricity and water in terms of its importance in everyday life--and the idea that you would have an unregulated monopoly selling you water, who wants that? No one wants that.”

In the regulatory vacuum, Sohn said, the states are the new battleground for getting net neutrality and broadband access to everyone--and they are well poised to fight that fight. Several states have passed net neutrality laws, including California (the ISPs, of course, are fighting back with lawsuits). And though the federal government has failed to properly expand broadband access, states can do, and some have done, much better:

The FCC and other agencies have spent about 50 billion dollars trying to build broadband everywhere and they’ve failed miserably. They invested in slow technologies, they weren’t careful with where they built, we have slow networks, and by one count we have 42 million Americans that don’t have access to any network at all. We need to be much much smarter. It’s not only about who gets the money, or how much, or for what, but it’s also how it’s given out. And that’s one of the reasons why I’m favorable towards giving a big chunk to the states. They’ll have a better idea of where the need is.

[Embedded video: https://www.youtube.com/embed/EUTcpkNfF9w (privacy info: this embed serves content from youtube.com)]

This chat was recorded just weeks before California Governor Newsom signed a massive and welcome multi-billion-dollar public fiber package into law in late July.

The conversation then turned to questions from the audience, which tackled ways to kickstart competition in the ISP market, how to convince politicians to make an expensive fiber optic investment, and ultimately, what the role of government should be in an area where it has (so far) failed. You can, and should, watch the entire Fireside Chat here. Whatever you take away from this wide-ranging discussion of open internet issues, we hope you’ll help us work towards Sohn’s vision of a world where “everyone’s connected to a future proof, fast and affordable—and open—internet.” This is a vision that EFF shares, and one that we believe can exist—if we fight for it.

Check out additional recaps of EFF's 30th anniversary conversation series, and don't miss our final program where we'll delve into the dawn of digital activism with EFF’s early leaders on July 28, 2021: EFF30 Fireside Chat: Founders Edition.

EFF Sues U.S. Postal Service For Records About Covert Social Media Spying Program

EFF - Tue, 07/27/2021 - 2:56pm
Service Looked Through People’s Posts Prior to Street Protests

Washington D.C.—The Electronic Frontier Foundation (EFF) filed a Freedom of Information Act (FOIA) lawsuit against the U.S. Postal Service and its inspection agency seeking records about a covert program to secretly comb through online posts of social media users before street protests, raising concerns about chilling the privacy and expressive activity of internet users.

Under an initiative called the Internet Covert Operations Program, analysts at the U.S. Postal Inspection Service (USPIS), the Postal Service’s law enforcement arm, sorted through massive amounts of data created by social media users to surveil what they were saying and sharing, according to media reports. Internet users’ posts on Facebook, Twitter, Parler, and Telegram were likely swept up in the surveillance program.

USPIS has not disclosed details about the program or any records responding to EFF’s FOIA request asking for information about the creation and operation of the surveillance initiative. In addition to those records, EFF is also seeking records on the program’s policies and analysis of the information collected, and communications with other federal agencies, including the Department of Homeland Security (DHS), about the use of social media content gathered under the program.

“We’re filing this FOIA lawsuit to shine a light on why and how the Postal Service is monitoring online speech. This lawsuit aims to protect the right to protest,” said Houston Davidson, EFF public interest legal fellow. “The government has never explained the legal justifications for this surveillance. We’re asking a court to order the USPIS to disclose details about this speech-monitoring program, which threatens constitutional guarantees of free expression and privacy.”

Media reports revealed that a government bulletin dated March 16 was distributed across DHS’s state-run security threat centers, alerting law enforcement agencies that USPIS analysts monitored “significant activity regarding planned protests occurring internationally and domestically on March 20, 2021.” Protests around the country were planned for that day, and locations and times were being shared on Parler, Telegram, Twitter, and Facebook, the bulletin said.

“Monitoring and gathering people’s social media activity chills and suppresses free expression,” said Aaron Mackey, EFF senior staff attorney. “People self-censor when they think their speech is being monitored and could be used to target them. A government effort to scour people’s social media accounts is a threat to our civil liberties.”

For the complaint:

For more on this case:

For more on social media surveillance:

Contact: Houston Davidson, Legal Fellow, houston@eff.org; Aaron Mackey, Senior Staff Attorney, amackey@eff.org

EFF, ACLU Urge Appeals Court to Revive Challenge to Los Angeles’ Collection of Scooter Location Data

EFF - Fri, 07/23/2021 - 5:48pm
Lower Court Improperly Dismissed Lawsuit Against Privacy Invasive Data Collection Practice

San Francisco—The Electronic Frontier Foundation and the ACLU of Northern and Southern California today asked a federal appeals court to reinstate a lawsuit they filed on behalf of electric scooter riders challenging the constitutionality of Los Angeles’ highly privacy-invasive collection of detailed trip data and real-time locations and routes of scooters used by thousands of residents each day.

The Los Angeles Department of Transportation (LADOT) collects from operators of dockless vehicles like Lyft, Bird, and Lime information about every single scooter trip taken within city limits. It uses software it developed to gather location data through Global Positioning System (GPS) trackers on scooters. The system doesn’t capture the identity of riders directly, but collects with precision riders’ location, routes, and destinations to within a few feet, which can easily be used to reveal the identities of riders.
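To see why "no rider identity" is cold comfort, consider a minimal sketch (ours, not LADOT's actual system, with fabricated coordinates): most scooter trips start near where a rider lives, so simply tallying a pseudonymous device's most frequent trip start point yields a strong guess at a home address.

```python
# Hypothetical illustration of re-identification from "anonymous" trip data.
# Rounding to 3 decimal places groups points into roughly 100-meter cells.
from collections import Counter

def likely_home(trips, precision=3):
    """Guess a rider's home cell: the most common rounded trip start point."""
    cells = Counter((round(lat, precision), round(lon, precision))
                    for lat, lon, *_ in trips)
    return cells.most_common(1)[0][0]

# Fabricated trip log for one pseudonymous scooter rider
trips = [
    (34.05221, -118.24361, "to work"),
    (34.05219, -118.24368, "to gym"),
    (34.05224, -118.24359, "to cafe"),
    (34.09000, -118.30000, "one-off errand"),
]
print(likely_home(trips))  # → (34.052, -118.244)
```

A few dozen lines of analysis like this, combined with a reverse-geocoding lookup or property records, can turn "anonymous" GPS trails into a name.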

A lower court erred in dismissing the case, EFF and the ACLU said in a brief filed today in the U.S. Court of Appeals for the Ninth Circuit. The court incorrectly determined that the practice, unprecedented in both its invasiveness and scope, didn’t violate the Fourth Amendment. The court also abused its discretion, failing in its duty to credit the plaintiff’s allegations as true, by dismissing the case without allowing the riders to amend the lawsuit to fix defects in the original complaint, as federal rules require.

“Location data can reveal detailed, sensitive, and private information about riders, such as where they live, who they work for, who their friends are, and when they visit a doctor or attend political demonstrations,” said EFF Surveillance Litigation Director Jennifer Lynch. “The lower court turned a blind eye to Fourth Amendment principles. And it ignored Supreme Court rulings establishing that, even when location data like scooter riders’ GPS coordinates are automatically transmitted to operators, riders are still entitled to privacy over the information because of the sensitivity of location data.”

The city has never presented a justification for this dragnet collection of location data, including in this case, and has said it’s an “experiment” to develop policies for motorized scooter use. Yet the lower court decided on its own that the city needs the data and disregarded plaintiff Justin Sanchez’s statements that none of Los Angeles’ potential uses for the data necessitates collection of all riders’ granular and precise location information en masse.

“LADOT’s approach to regulating scooters is to collect as much location data as possible, and to ask questions later,” said Mohammad Tajsar, senior staff attorney at the ACLU of Southern California. “Instead of risking the civil rights of riders with this data grab, LADOT should get back to the basics: smart city planning, expanding poor and working people’s access to affordable transit, and tough regulation on the private sector.”

The lower court also incorrectly dismissed Sanchez’s claims that the data collection violates the California Electronic Communications Privacy Act (CalECPA), which prohibits the government from accessing electronic communications information without a warrant or other legal process. The court’s mangled and erroneous interpretation of CalECPA—that only courts that have issued or are in the process of issuing a warrant can decide whether the law is being violated—would, if allowed to stand, severely limit the ability of people subjected to warrantless collection of their data to ever sue the government.

“The Ninth Circuit should overturn dismissal of this case because the lower court made numerous errors in its handling of the lawsuit,” said Lynch. “The plaintiffs should be allowed to file an amended complaint and have a jury decide whether the city is violating riders’ privacy rights.”

For the brief:

Contact: Jennifer Lynch, Surveillance Litigation Director, jlynch@eff.org

Data Brokers are the Problem

EFF - Fri, 07/23/2021 - 3:59pm

Why should you care about data brokers? Reporting this week about a Substack publication outing a priest with location data from Grindr shows once again how easy it is for anyone to take advantage of data brokers’ stores to cause real harm.

This is not the first time Grindr has been in the spotlight for sharing user information with third-party data brokers. The Norwegian Consumer Council singled it out in its 2020 "Out of Control" report, before the Norwegian Data Protection Authority fined Grindr earlier this year. At the time, the report specifically warned that the app’s data-mining practices could put users at serious risk in places where homosexuality is illegal.

But Grindr is just one of countless apps engaging in this exact kind of data sharing. The real problem is the many data brokers and ad tech companies that amass and sell this sensitive data without anything resembling real users’ consent.

Apps and data brokers claim they are only sharing so-called “anonymized” data. But that’s simply not possible. Data brokers sell rich profiles with more than enough information to link sensitive data to real people, even if the brokers don’t include a legal name. In particular, there’s no such thing as “anonymous” location data. Data points like one’s home or workplace are identifiers themselves, and a malicious observer can connect movements to these and other destinations. In this case, that includes gay bars and private residences.

Another piece of the puzzle is the ad ID, another so-called “anonymous" label that identifies a device. Apps share ad IDs with third parties, and an entire industry of “identity resolution” companies can readily link ad IDs to real people at scale.
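The mechanics of "identity resolution" are not exotic: once two datasets share an ad ID, attaching a name to an "anonymous" event is a one-line join. The sketch below is our own illustration with invented records, not any real broker's data or API.

```python
# Hedged sketch: linking "anonymous" app events to real identities via the
# advertising ID both datasets carry. All records here are fabricated.

app_events = [  # what a broker might buy from an app's ad partners
    {"ad_id": "a1b2-c3d4", "lat": 38.889, "lon": -77.009, "app": "dating"},
    {"ad_id": "zzzz-0000", "lat": 40.713, "lon": -74.006, "app": "weather"},
]

resolution_graph = {  # what an identity-resolution firm might sell
    "a1b2-c3d4": {"name": "J. Doe", "email": "user@example.com"},
}

def resolve(events, graph):
    """Attach identities to events that share an ad ID with the graph."""
    return [(graph[e["ad_id"]]["name"], (e["lat"], e["lon"]))
            for e in events if e["ad_id"] in graph]

print(resolve(app_events, resolution_graph))
# → [('J. Doe', (38.889, -77.009))]
```

The point is that the "anonymous" label does no work: the ad ID is a stable key, and anyone who buys both sides of the join gets the name.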

All of this underlines just how harmful a collection of mundane-seeming data points can become in the wrong hands. We’ve said it before and we’ll say it again: metadata matters.

That’s why the U.S. needs comprehensive data privacy regulation more than ever. This kind of abuse is not inevitable, and it must not become the norm.

Council of Europe’s Actions Belie its Pledges to Involve Civil Society in Development of Cross Border Police Powers Treaty

EFF - Thu, 07/22/2021 - 10:06pm

As the Council of Europe’s flawed cross border surveillance treaty moves through its final phases of approval, time is running out to ensure cross-border investigations occur with robust privacy and human rights safeguards in place. The innocuously named “Second Additional Protocol” to the Council of Europe’s (CoE) Cybercrime Convention seeks to set a new standard for law enforcement investigations—including those seeking access to user data—that cross international boundaries, and would grant a range of new international police powers. 

But the treaty’s drafting process has been deeply flawed, with civil society groups, defense attorneys, and even data protection regulators largely sidelined. We are hoping that CoE's Parliamentary Assembly (PACE), which is next in line to review the draft Protocol, will give us the opportunity to present our privacy and human rights concerns, and take them seriously, as it formulates its opinion and recommendations before the CoE’s final body of approval, the Council of Ministers, decides the Protocol’s fate. According to the Terms of Reference for the preparation of the Draft Protocol, the Council of Ministers may consider inviting parties “other than member States of the Council of Europe to participate in this examination.”

The CoE relies on committees to generate the core draft of treaty texts. In this instance, the CoE’s Cybercrime Committee (T-CY) Plenary negotiated and drafted the Protocol’s text with the assistance of a drafting group consisting of representatives of State Parties. The process, however, has been fraught with problems. To begin with, T-CY’s Terms of Reference for the drafting process drove a lengthy, non-inclusive procedure that relied on closed sessions (Article 4.3, T-CY Rules of Procedure). While the Terms of Reference allow the T-CY to invite individual subject matter experts on an ad hoc basis, key voices such as data protection authorities, civil society experts, and criminal defense lawyers were mostly sidelined. Instead, the process has been largely commandeered by law enforcement, prosecutors and public safety officials (see here, and here). 

Earlier in the process, in April 2018, EFF, CIPPIC, EDRI and 90 civil society organizations from across the globe requested that the CoE Secretariat General provide more transparency and meaningful civil society participation as the treaty was being negotiated and drafted—and not just during the CoE’s annual and somewhat exclusive Octopus Conferences. However, since T-CY began its consultation process in July 2018, input from external stakeholders has been limited to Octopus Conference participation and some written comments. Civil society organizations were not included in the plenary groups and subgroups where text development actually occurs, nor was our input meaningfully incorporated. 

Compounding matters, the T-CY’s final online consultation, where the near-final draft text of the Protocol was first presented to external stakeholders, only provided a 2.5-week window for input. The draft text included many new and complex provisions, including the Protocol’s core privacy safeguards, but excluded key elements such as the explanatory text that would normally accompany these safeguards. As was flagged by civil society, privacy regulators, and even the CoE’s own data protection committee, two and a half weeks is not enough time to provide meaningful feedback on such a complex international treaty. More than anything, this short consultation window gave the impression that T-CY’s external consultations were merely performative in nature. 

Despite these myriad shortcomings, the Council of Ministers (the CoE’s final statutory decision-making body, comprising member States’ Foreign Affairs Ministers) responded to our process concerns by arguing that external stakeholders had been consulted during the Protocol’s drafting process. Even more oddly, the Council of Ministers justified the demonstrably curtailed final consultation period by invoking its desire to complete the Protocol by the 20th anniversary of the CoE’s Budapest Cybercrime Convention (that is, by November 2021).

We respectfully disagree with the Ministers’ response. If T-CY wished to meet its November 2021 deadline, it had many options open to it. For instance, it could have included external stakeholders from civil society and from privacy regulators in its drafting process, as it had been urged to do on multiple occasions. 

More importantly, this is a complex treaty with wide ranging implications for privacy and human rights in countries across the world. It is important to get it right, and ensure that concerns from civil society and privacy regulators are taken seriously and directly incorporated into the text. Unfortunately, as the text stands, it raises many substantive problems, including the lack of systematic judicial oversight in cross-border investigations and the adoption of intrusive identification powers that pose a direct threat to online anonymity. The Protocol also undermines key data protection safeguards relating to data transfers housed in central instruments like the European Union’s Law Enforcement Directive and the General Data Protection Regulation. 

The Protocol now stands with the CoE’s PACE, which will issue an opinion on the Protocol and might recommend some additional changes to its substantive elements. It will then fall to the CoE’s Council of Ministers to decide whether to accept any of PACE’s recommendations and adopt the Protocol, a step which we still anticipate will occur in November. Together with CIPPIC, EDRI, Derechos Digitales, and NGOs around the world, we hope that PACE takes our concerns seriously, and that the Council produces a treaty that puts privacy and human rights first. 

Venmo Takes Another Step Toward Privacy

EFF - Wed, 07/21/2021 - 5:12pm

As part of a larger redesign, the payment app Venmo has discontinued its public “global” feed. That means the Venmo app will no longer show you strangers’ transactions—or show strangers your transactions—all in one place. This is a big step in the right direction. But, as the redesigned app rolls out to users over the next few weeks, it’s unclear what Venmo’s defaults will be going forward. If Venmo and parent company PayPal are taking privacy seriously, the app should make privacy the default, not just an option still buried in the settings.

Currently, all transactions and friends lists on Venmo are public by default, painting a detailed picture of who you live with, where you like to hang out, who you date, and where you do business. It doesn’t take much imagination to come up with all the ways this could cause harm to real users, and the gallery of Venmo privacy horrors is well-documented at this point.

However, Venmo apparently has no plans to make transactions private by default at this point. That would squander the opportunity it has right now to finally be responsive to the concerns of Venmo users, journalists, and advocates like EFF and Mozilla. We hope Venmo reconsiders.

There’s nothing “social” about sharing your credit card statement with your friends.

Even a seemingly positive move from “public” to “friends-only” defaults would maintain much of Venmo’s privacy-invasive status quo. That’s in large part because of Venmo’s track record of aggressively hoovering up users’ phone contacts and Facebook friends to populate their Venmo friends lists. Venmo’s installation process nudges users towards connecting their phone contacts and Facebook friends to Venmo. From there, the auto-syncing can continue silently and persistently, stuffing your Venmo friends list with people you did not affirmatively choose to connect with on the app. In some cases, there is no option to turn this auto-syncing off. There’s nothing “social” about sharing your credit card statement with a random subset of your phone contacts and Facebook friends, and Venmo should not make that kind of disclosure the default.

It’s also unclear if Venmo will continue to offer a “public” setting now that the global feed is gone. Public settings would still expose users’ activities on their individual profile pages and on Venmo’s public API, leaving them vulnerable to the kind of targeted snooping that Venmo has become infamous for.

We were pleased to see Venmo recently take the positive step of giving users settings to hide their friends lists. Throwing out the creepy global feed is another positive step. Venmo still has time to make transactions and friends lists private by default, and we hope it makes the right choice.

If you haven’t already, change your transaction and friends list settings to private by following the steps in this post.

Cheers to the Winners of EFF's 13th Annual Cyberlaw Trivia Night

EFF - Wed, 07/21/2021 - 3:00am

On June 17th, the best legal minds in the Bay Area gathered together for a night filled with tech law trivia—but there was a twist! With in-person events still on the horizon, EFF's 13th Annual Cyberlaw Trivia Night moved to a new browser-based virtual space, custom built in Gather. This 2D environment allowed guests to interact with other participants using video, audio, and text chat, based on proximity in the room.

EFF's staff joined forces to craft the questions, pulling details from the rich canon of privacy, free speech, and intellectual property law to create four rounds of trivia for this year's seven competing teams.

Our virtual Gathertown room!

As the evening began, contestants explored the virtual space and caught up with each other, but the time for trivia would soon be at hand! After welcoming everyone to the event, our intrepid Quiz Master Kurt Opsahl introduced our judges Cindy Cohn, Sophia Cope, and Mukund Rathi. Attendees were then asked to meet at their team's private table, allowing them to freely discuss answers without other teams being able to overhear, and so the trivia began!

Everyone got off to a great start for the General Round 1 questions, featuring answers that ranged from winged horses to Snapchat filters. For the Intellectual Property Round 2, the questions proved more challenging, but the teams quickly rallied for the Privacy & Free Speech Round 3. With no clear winners so far, teams entered the final 4th round hoping to break away from the pack and secure 1st place.

But a clean win was not to be!

Durie Tangri's team "The Wrath of (Lina) Khan" and Fenwick's team "The NFTs: Notorious Fenwick Trivia" were still tied for first! Always prepared for such an occurrence, the teams headed into a bonus Tie-Breaker round to settle the score. Or so we thought...

After extensive deliberation, the judges arrived at their decision and announced that "The Wrath of (Lina) Khan" had the answer closest to correct and were the 1st place winners, with "The NFTs: Notorious Fenwick Trivia" coming in 2nd, and Ridder, Costa & Johnstone's team "We Invented Email" coming in 3rd. Easy, right? No!

Fenwick appealed to the judges, arguing that under official "Price is Right" rules, the answer closest to correct without going over should receive the tie-breaker point: cue more extensive deliberation (lawyers). Turns out...they had a pretty good point. Motion for Reconsideration: Granted! 

But what to do when the winners had already been announced?

Two first place winners, of course! Which also meant that Ridder, Costa & Johnstone's team "We Invented Email" moved into the 2nd place spot, and Facebook's team "Whatsapp" were the new 3rd place winners! Whew!  Big congratulations to both winners, enjoy your bragging rights!

EFF's legal interns also joined in the fun, and their team name "EFF the Bluebook" followed the proud tradition of having an amazing team name, despite The Rules stating they were unable to formally compete.

The coveted Cyberlaw Quiz Cups that are actually glass beer steins...

EFF hosts the Cyberlaw Trivia Night to gather those in the legal community who help protect online freedom for their users. Among the many firms that continue to dedicate their time, talent, and resources to the cause, we would especially like to thank Durie Tangri LLP; Fenwick; Ridder, Costa & Johnstone LLP; and Wilson Sonsini Goodrich & Rosati LLP for sponsoring this year’s Bay Area event.

If you are an attorney working to defend civil liberties in the digital world, consider joining EFF's Cooperating Attorneys list. This network helps EFF connect people to legal assistance when we are unable to assist. Interested lawyers reading this post can go here to join the Cooperating Attorneys list.

Are you interested in attending or sponsoring an upcoming Trivia Night? Please email hannah.diaz@eff.org for more information.

India’s Draconian Rules for Internet Platforms Threaten User Privacy and Undermine Encryption

EFF - Tue, 07/20/2021 - 5:14pm

The Indian government’s new Intermediary Guidelines and Digital Media Ethics Code (“2021 Rules”) pose huge problems for free expression and Internet users’ privacy. They include dangerous requirements for platforms to identify the origins of messages and pre-screen content, which fundamentally breaks strong encryption for messaging tools. Though WhatsApp and others are challenging the rules in court, the 2021 Rules have already gone into effect.

Three UN Special Rapporteurs—the Rapporteurs for Freedom of Expression, Privacy, and Association—heard and in large part affirmed civil society’s criticism of the 2021 Rules, acknowledging that they did “not conform with international human rights norms.” Indeed, the Rapporteurs raised serious concerns that Rule 4 of the guidelines may compromise the right to privacy of every internet user, and called on the Indian government to carry out a detailed review of the Rules and to consult with all relevant stakeholders, including NGOs specializing in privacy and freedom of expression.

The 2021 Rules contain two provisions that are particularly pernicious: the Rule 4(4) Content Filtering Mandate and the Rule 4(2) Traceability Mandate.

Content Filtering Mandate

Rule 4(4) compels content filtering, requiring that providers be able to review the content of communications, which not only fundamentally breaks end-to-end encryption but also creates a system for censorship. Significant social media intermediaries (i.e. Facebook, WhatsApp, Twitter, etc.) must “endeavor to deploy technology-based measures,” including automated tools or other mechanisms, to “proactively identify information” that has been forbidden under the Rules. This cannot be done without breaking the higher-level promises of secure end-to-end encrypted messaging.

Client-side scanning has been proposed as a way to enforce content blocking without technically breaking end-to-end encryption: the user’s own device could use its knowledge of the unencrypted content to enforce restrictions, refusing to transmit, or perhaps to display, certain prohibited information, without revealing to the service provider who was attempting to communicate or view it. That’s wrong. Client-side scanning puts a robot-spy in the room. A spy in a place where people are talking privately makes it not a private conversation, and a robot-spy is just as much a spy as a human one.

As we explained last year, client-side scanning inherently breaks the higher-level promises of secure end-to-end encrypted communications. If the provider controls what’s in the set of banned materials, it can test against individual statements: a test against a set of size 1 is, in practice, the same as being able to decrypt a message. And with client-side scanning, there’s no way for users, researchers, or civil society to audit the contents of the banned materials list.
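The set-of-size-1 attack can be made concrete with a short sketch. This is a hypothetical, simplified scanner (the function names and the example message are illustrative, not any vendor’s actual design): the client hashes each outgoing plaintext and checks it against a provider-supplied banned list. Because the provider chooses the list, it can use a single-entry list to confirm or deny the exact content of a private message.

```python
import hashlib

def scan_outgoing(message: bytes, banned_hashes: set) -> bool:
    """Hypothetical client-side scanner: checks the plaintext before encryption
    against a provider-supplied list of banned content hashes."""
    return hashlib.sha256(message).digest() in banned_hashes

# The provider controls the banned list. By pushing a list of size 1,
# it turns the scanner into an oracle that confirms or denies the exact
# content of a private message: functionally, a decryption capability.
guess = b"meet at the square at noon"
banned = {hashlib.sha256(guess).digest()}
```

Here `scan_outgoing(b"meet at the square at noon", banned)` returns `True`, telling the provider that the user typed exactly that sentence, even though the message itself was never decrypted in transit.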

The Indian government frames the mandate as directed toward terrorism, obscenity, and the scourge of child sexual abuse material, but the mandate is actually much broader. It also imposes proactive and automatic enforcement of the 2021 Rules’ Rule 3(1)(d) content takedown provisions, requiring the proactive blocking of material previously held to be “information which is prohibited under any law,” including specifically laws for the protection of “the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation,” and incitement to any such act. This includes the widely criticized Unlawful Activities Prevention Act, which has reportedly been used to arrest academics, writers and poets for leading rallies and posting political messages on social media.

This broad mandate is all that is necessary to automatically suppress dissent, protest, and political activity that a government does not like, before it can even be transmitted. The Indian government's response to the Rapporteurs dismisses this concern, writing “India's democratic credentials are well recognized. The right to freedom of speech and expression is guaranteed under the Indian Constitution.”

The response misses the point. Even if a democratic state applies this incredible power to preemptively suppress expression only rarely and within the bounds of internationally recognized rights to freedom of expression, Rule 4(4) puts in place the toolkit for an authoritarian crackdown, automatically enforced not only in public discourse, but even in private messages between two people.

A commitment to human rights in a democracy requires civic hygiene: refusing to create the tools of undemocratic power.

Moreover, rules like these give comfort and credence to authoritarian efforts to enlist intermediaries in their crackdowns. Were this Rule available, word for word, to China, it could be used to require social media companies to block images of Winnie the Pooh, as has happened there, from being transmitted, even in direct “encrypted” messages.

Automated filters also violate due process, reversing the burden of censorship. As the three UN Special Rapporteurs made clear, a “general monitoring obligation that will lead to monitoring and filtering of user-generated content at the point of upload ... would enable the blocking of content without any form of due process even before it is published, reversing the well-established presumption that States, not individuals, bear the burden of justifying restrictions on freedom of expression.” 

Traceability Mandate

The traceability provision, in Rule 4(2), requires any large social media intermediary that provides messaging services to “enable the identification of the first originator of the information on its computer resource” in response to a court order or a decryption request issued under the 2009 Decryption Rules. The Decryption Rules allow authorities to request the interception or monitoring of any decrypted information generated, transmitted, received, or stored in any computer resource.

The Indian government responded to the Rapporteur report, claiming to honor the right to privacy:

“The Government of India fully recognises and respects the right of privacy, as pronounced by the Supreme Court of India in K.S. Puttaswamy case. Privacy is the core element of an individual's existence and, in light of this, the new IT Rules seeks information only on a message that is already in circulation that resulted in an offence.”

This narrow view of Rule 4(2) is fundamentally mistaken. Implementing the Rule requires the messaging service to collect information about every message, even before any content is deemed a problem, allowing the government to conduct surveillance with a time machine. This changes the security model and prevents implementation of the strong encryption that is a fundamental backstop for protecting human rights in the digital age.

The Danger to Encryption

Both the traceability and filtering mandates endanger encryption, requiring companies to know details about each message that their encryption and security designs would otherwise let users keep private. Strong end-to-end encryption means that only the sender and the intended recipient know the content of communications between them. Even a provider that merely compares two encrypted messages to see whether they match, without directly examining the content, reduces security by creating more opportunities to guess at the content.

It is no accident that the 2021 Rules are attacking encryption. Riana Pfefferkorn, Research Scholar at the Stanford Internet Observatory, wrote that the rules were intentionally aimed at end-to-end encryption since the government would insist on software changes to defeat encryption protections:

Speaking anonymously to The Economic Times, one government official said the new rules will force large online platforms to “control” what the government deems to be unlawful content: Under the new rules, “platforms like WhatsApp can’t give end-to-end encryption as an excuse for not removing such content,” the official said.

The 2021 Rules’ unstated requirement to break encryption goes beyond the mandate of the Information Technology (IT) Act, which authorized the 2021 Rules. India’s Centre for Internet & Society’s detailed legal and constitutional analysis of the Rules explains: “There is nothing in Section 79 of the IT Act to suggest that the legislature intended to empower the Government to mandate changes to the technical architecture of services, or undermine user privacy.” Both are required to comply with the Rules.

There are better solutions. For example, WhatsApp found a way to discourage massive chain forwarding of messages without itself knowing their content: the app records the number of times a message has been forwarded inside the message itself, and changes its behavior based on that count. Because the forwarding count travels inside the encrypted message, WhatsApp’s servers never see it. So your app might refuse to forward a chain letter, because the message itself shows it was massively forwarded, but the company cannot look at the encrypted message and learn its content.
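The idea can be sketched in a few lines. This is a toy illustration, not WhatsApp’s actual protocol: the one-time-pad “encryption,” the field names, and the `FORWARD_LIMIT` value are all stand-ins. What matters is that the forwarding count lives inside the encrypted payload, so the limit is enforced by the client while the server only relays opaque bytes.

```python
import json
import secrets

FORWARD_LIMIT = 5  # illustrative threshold, not WhatsApp's actual value

def xor(data: bytes, key: bytes) -> bytes:
    # Toy one-time-pad "encryption" for illustration only;
    # a real messenger uses an authenticated protocol like Signal's.
    return bytes(a ^ b for a, b in zip(data, key))

def seal(msg: dict, key: bytes) -> bytes:
    return xor(json.dumps(msg).encode(), key)

def unseal(blob: bytes, key: bytes) -> dict:
    return json.loads(xor(blob, key).decode())

def forward(envelope: bytes, key: bytes):
    """Runs on the *client*; the server only ever relays opaque bytes."""
    msg = unseal(envelope, key)
    if msg["forward_count"] >= FORWARD_LIMIT:
        return None  # the app refuses to forward; the server never saw why
    msg["forward_count"] += 1
    return seal(msg, key)

key = secrets.token_bytes(1024)  # shared by sender and recipient, not the server
envelope = seal({"text": "chain letter", "forward_count": 0}, key)
```

After five hops, `forward()` returns `None` and the chain stops, yet at no point could anyone holding only `envelope` (and not `key`) read the count or the text.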

Likewise, empowering users to report content can mitigate many of the harms that inspired the 2021 Rules. The key principle of end-to-end encryption is that a message reaches its destination securely, without interception by eavesdroppers. Nothing prevents the recipient from reporting abusive or unlawful messages, including the now-decrypted content and the sender’s information. An intermediary can facilitate such user reporting and still provide the strong encryption necessary for a free society. Furthermore, there are cryptographic techniques that let a user report abuse in a way that identifies the abusive or unlawful content, makes complaints unforgeable, and preserves the privacy of people not directly involved.
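One family of such techniques is “message franking,” deployed by Facebook Messenger. The HMAC-based sketch below is a minimal, simplified illustration of the idea, not any messenger’s actual construction: the sender commits to the plaintext with a one-time franking key that travels inside the encrypted payload, the server tags the opaque commitment, and a reporting recipient reveals the plaintext and franking key so the server can verify the complaint is genuine.

```python
import hashlib
import hmac
import secrets

def frank(message: bytes):
    """Sender commits to the message with a one-time franking key.
    k_f travels inside the encrypted payload; only the commitment
    (which reveals nothing about the plaintext) is visible to the server."""
    k_f = secrets.token_bytes(32)
    commitment = hmac.new(k_f, message, hashlib.sha256).digest()
    return k_f, commitment

def server_tag(commitment: bytes, server_key: bytes) -> bytes:
    """Server binds the opaque commitment to this delivery without
    learning anything about the message content."""
    return hmac.new(server_key, commitment, hashlib.sha256).digest()

def verify_report(message, k_f, commitment, tag, server_key) -> bool:
    """On report: the recipient reveals message + k_f; the server checks
    both MACs, so a forged complaint about a never-sent message fails."""
    ok_commit = hmac.compare_digest(
        hmac.new(k_f, message, hashlib.sha256).digest(), commitment)
    ok_tag = hmac.compare_digest(
        hmac.new(server_key, commitment, hashlib.sha256).digest(), tag)
    return ok_commit and ok_tag
```

The design choice here is the point of the paragraph above: the server can vouch that a reported message really transited its system, without ever having been able to read it.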

The 2021 Rules endanger encryption, weakening the privacy and security of ordinary people throughout India, while creating tools that could all too easily be misused against fundamental human rights and that may inspire authoritarian regimes throughout the world. The Rules should be withdrawn and reconsidered in consultation with civil society and advocates for international human rights, to ensure that they help protect and preserve fundamental rights in the digital age.