Electronic Frontier Foundation

End Two Federal Programs that Fund Police Surveillance Tech

EFF - Mon, 01/25/2021 - 6:39pm

The new administration can do two things immediately that would help stop some of the more nefarious ways that police departments get surveillance technology. It should further roll back the infamous 1033 program, authorized by the National Defense Authorization Act, which allows local police to acquire surplus military gear. And it should bar the use of funds seized through civil asset forfeiture to purchase these technologies.

Police surveillance tech is a multi-billion dollar industry. Police departments find as many avenues to fund it as they can. This technology is not just deployed against suspects after acquiring a warrant—often police use it for dragnet surveillance of cars and people. Examples include automated license plate readers (ALPRs) that track cars as they move about the city (including to or from places of worship and doctors’ or attorneys’ offices), and drones or CCTV camera networks that police use to monitor protests. Departments have even bought police robots programmed to identify supposedly “suspicious” people, which is an invitation to further racial profiling.  

Beyond municipal budgets, there are numerous ways that police receive the funding that allows them to purchase drones and license plate readers, and build surveillance command centers known as real-time crime centers. Federal grants, wealthy individual benefactors, private foundations, and kickbacks from surveillance companies all contribute to police departments’ massive build-up of surveillance tech. Then police use it to track, listen to, and identify people going about their business. 

Two of these methods can immediately be addressed by the Biden administration. 

The 1033 Program

The infamous 1033 program provides police with excess military hardware like armored vehicles and weapons. Created in the 1990s, it has been the subject of a political tug-of-war in recent years. The Obama administration restricted the types of equipment that could be given to police departments after the Black-led protests against police violence in Ferguson, Baltimore, and elsewhere in 2014 and 2015. The Trump administration reversed course and ramped up the program shortly after taking office.

In January 2021, the annual military spending bill (which Congress passed over President Trump's veto) reinstated some necessary guardrails: it banned grenades, bayonets, weaponized combat vehicles, and weaponized drones from being transferred from the military to local police departments.

We need further rules to prevent other equipment like surveillance drones from reaching our streets. 

Beyond phasing out the Department of Defense's 1033 program, which would help demilitarize local police, we hope the new Biden administration will also re-think the Department of Homeland Security grants to state and local police that fund surveillance technology and fusion centers across the country. In addition to contributing to the militaristic and antagonistic posture of police agencies, fusion centers have recently been discovered (yet again) to be trafficking in dubious intelligence based on rumors, speculation, and disinformation.

Equitable Sharing and Civil Asset Forfeiture

Civil asset forfeiture is a process that allows law enforcement to seize money and property from individuals suspected of being involved in a crime before they have been convicted, and sometimes before they've even been charged. When a local law enforcement agency partners with a federal agency, it can apply for a share of the resources seized through a process called "equitable sharing." Local law enforcement often spends these funds on surveillance technology, such as ALPRs, body-worn cameras, and child monitoring software.

This shared money is commonly passed from the federal agency to local law enforcement. For example, between July 2012 and June 2015, the city of San Jose, California, received $569,461 from the equitable sharing program. Likewise, in 2013, part of $95,000 in confiscated funds went to buy ALPRs for the city of Miami, Florida. In fact, ALPRs are a common purchase with these funds.

Because the person whose property is being confiscated has not been found guilty of a crime, banning civil asset forfeiture has had bipartisan support in the past. At least three states, North Carolina, New Mexico, and Nebraska, have banned the practice, and there have been a number of federal bills to do the same.

Under President Obama, civil asset forfeiture skyrocketed. In 2015, he issued an executive order to create some guardrails for the program. Unfortunately, the Trump administration overturned them in 2017. 

Now, President Biden has the chance to use his executive power to shut off this dangerous forfeiture-to-surveillance pipeline. This kind of supposedly “free federal money” incentivizes local police departments to buy invasive surveillance technology they don’t actually need, and often it is generated by stealing cash from innocent people.

For Many, the Arab Spring Isn't Over

EFF - Mon, 01/25/2021 - 3:16pm

Ten years ago today, Egyptians took to the streets to topple a dictator who had clung to power for nearly three decades. January 25th remains one of the most important dates of the Arab Spring, a series of massive, civilian-led protests and uprisings that spread across the Middle East and North Africa a decade ago. Using social media and other digital technologies to spread the word and amplify their organizing, people across the region demanded an end to the corruption and authoritarian rule that had plagued their societies.

A decade later, the fallout from this upheaval has taken countries in different directions. While Tunisia immediately abolished its entrenched Internet censorship regime and took steps toward democracy, other countries in the region—such as Egypt, Saudi Arabia, and Bahrain—have implemented more and more tools for censorship and surveillance. From the use of Western-made spyware to target dissidents to collusion with US social media companies to censor journalism, the hope once expressed by tech firms has given way to cynical, amoral profiteering.

As we consider the role that social media and online platforms have played in the U.S. in recent months, it’s both instructive and essential to remember the events that took place a decade ago, and how policies and decisions made at the time helped to strengthen (or, in some cases, handicap) those democratic movements. There are also worthwhile parallels to be drawn between calls in the U.S. for stronger anti-terrorism laws and the shortsighted cybercrime and counterterrorism laws­ passed by other countries after the upheaval. And as governments today wield new, dangerous technologies, such as face surveillance, to identify Black Lives Matter protestors as well as those responsible for the attempted insurrection at the U.S. Capitol, we must be reminded of the expansive surveillance regimes that developed in many Middle Eastern and North African countries over the last ten years as well. 

But most importantly, we must remember that a decade later, despite setbacks, much of the work that was started in 2011 is still ongoing.

EFF’s Work In the Region

Just a few weeks prior to the January 25th protests in Egypt, a street vendor named Mohamed Bouazizi had set himself on fire in Tunisia in protest of the government's corruption and brutality. Following his death, others in the country began to protest. These protests in Tunisia and Egypt inspired others throughout the region—and indeed, throughout the world—to rise up and fight for their rights, but they were often met with violent pushback. EFF focused heavily at the time on lifting up the voices of those who fought against censorship, for free expression, and for expanded digital rights. A detailed history of the issues EFF tracked at the time would be too lengthy to cover here, but a few instances stand out and help shine a light on what the region is experiencing now.

A Social Media Revolution?

Many governments have long seen social media as a potential threat. In the early part of the new millennium, countries across the globe engaged in a wide variety of censorship, from blocking individual platforms to cutting off Internet access altogether. As the use of social media accelerated, governments took action: Thailand blocked YouTube in 2006; Syria blocked Facebook in 2007; countries from Turkey to Tunisia to Iran followed suit. Within a few years, tech companies were inundated with government requests to block access for some users or to take down specific content. For the most part, they complied, though some companies, like Twitter, didn't do so until much later.

In 2010, a picture of the body of Khaled Saeed, who had been brutally murdered by Egyptian police, began to spread across Facebook. Pseudonymous organizers created a page to memorialize Saeed; it quickly gathered followers and became the place where the now-famous 25th of January protests were first called for.

But although Facebook was later happy to take credit for the role it played in the uprising, the company had actually taken the page down just two months before the protests because its administrators were violating the platform's "real name" policy—only restoring it after allies stepped in to help.

Another emblematic incident from that era occurred when photo-sharing platform Flickr removed images of Mubarak's security officers that activists had liberated from Cairo's state security offices in the days following the revolution. Journalist Hossam Hamalawy protested the company's decision, which was allegedly made on the basis of a bogus copyright claim, prompting debate among civil society as to whether Flickr's decision had been appropriate.

Overcoming Censorship, Fighting for Free Expression in Egypt

As governments in the region tried to blunt the impact of the protests and to minimize the spread of dissent on social media, critics, including bloggers, were often jailed, many of them charged merely for expressing themselves online. EFF took particular note of the cases of Maikel Nabil Sanad and Ayman Youssef Mansour, the first and second bloggers in post-Mubarak Egypt to be sentenced to jail for their online expression.

While the Mubarak regime had used emergency law to silence its critics, the military silenced bloggers at whim. Sanad was sentenced, by a military court, to three years in prison for accusing the military of having conducted virginity tests on female protesters (a charge later found to be true). EFF highlighted his case numerous times, and reported on his eventual release, granted alongside that of 1,959 other prisoners to mark the first anniversary of the revolution.

Mansour was tried by a civilian court and found to be "in contempt of religion," a crime under article 98(f) of the Penal Code. His crime was joking about Islam on Facebook.  

Today, EFF and other human rights organizations continue to advocate for the release of jailed Egyptian activists who have been arrested by the current regime in an attempt to silence their voices. Alaa Abd El Fattah, who has been arrested (and eventually released) by every Egyptian head of state, including during the revolution, and Amal Fathy, an activist for women's rights, are just two of the prominent critics of the government whose use of the Internet has cost them their freedom. EFF's "Offline" campaign showcases key cases from across the globe of individuals who have been silenced, who may not be receiving wide coverage, but whose stories we believe speak to a wider audience concerned with online freedom.

Graffiti in Cairo depicting Alaa Abd El Fattah

A Tunisian Internet Agency Becomes A Hacker Space 

The lasting effects of the Arab Spring in Tunisia have been notable. Tunisians under the Zine El Abidine Ben Ali regime experienced curtailed digital rights, including the blocking of websites and the surveillance of citizens. Tunisian free expression advocates worked for years to raise awareness of the country’s pervasive Internet controls, and in 2011, amidst the “Jasmine Revolution,” Ben Ali promised to end the filtering in his final speech on January 13, before fleeing to Saudi Arabia. 

This created a dilemma for the Agence Tunisienne d'Internet (ATI), or Tunisian Internet Agency, which was caught between implementing censorship orders and arguing in support of free expression. EFF followed the battle and the online protests that ensued as Tunisians fought for freedom from censorship. The country's highest court ruled against the filtering in 2012, and the ATI's headquarters, formerly a private home of Ben Ali and the seat of his regime's censorship and surveillance technologies, became #404labs, a hackerspace where Tunisians can innovate. EFF was there to celebrate, and the site has since hosted events such as the Freedom Online Coalition and the Arab Bloggers Meeting.

Moez Chakchouk, current Transport Minister of Tunisia and former CEO of the Tunisian Internet Agency

Syria and Tunisia Spy On Their Own

As protests spread throughout Syria in 2011, bloggers and programmers were often the targets of threats, attacks, and detention by the Bashar al-Assad regime. While this harassment was public, a secret danger also lurked in cyberspace: the Syrian government, after having blocked various sites such as Facebook for years, was covertly surveilling online activity and communications throughout the country via both malware and “Man-in-the-middle” attacks on Facebook. The extent of the surveillance was not well known before the Arab Spring. 

In 2011, EFF helped raise the alarm about the malware. Later, we were among the first to identify the “Man-in-the-middle” attacks, as well as phishing attacks against activists in the area. 

Around this same time in Tunisia, reports of hacked Facebook pages tipped off the company's security team that the nation's ISPs were harvesting the Facebook passwords of anyone in the country who logged into the site. Presumably, this information was then fed to the Ben Ali regime in an effort to remove protest pages from the site. The company responded by enabling the encrypted HTTPS protocol for users in Tunisia, a move that helped spur it to do the same elsewhere later. Awareness of the details of this spying, both during and after the Arab Spring, has not only helped protect activists from their governments, it has also spurred the adoption of safer, more secure online communications methods—a benefit to all who desire private communications.
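
For readers curious about the mechanics, here is a minimal sketch, purely our own illustration and not Facebook's actual infrastructure, of the kind of change involved: a plaintext HTTP listener that never accepts credentials and instead redirects every request to a hypothetical HTTPS origin, so passwords only ever cross the network inside an encrypted connection that an ISP cannot passively read.

    # Illustrative sketch only (not Facebook's code): redirect all plaintext
    # HTTP traffic to HTTPS so login credentials are never sent in the clear.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SITE = "https://example.com"  # hypothetical HTTPS origin

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def _redirect(self):
            self.send_response(301)
            self.send_header("Location", SITE + self.path)
            self.end_headers()

        do_GET = _redirect
        do_POST = _redirect   # login forms must never be submitted over HTTP

    if __name__ == "__main__":
        # The HTTPS side would additionally send a Strict-Transport-Security
        # header so browsers stop attempting unencrypted connections at all.
        HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()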

Today’s Issues in the Region Reflect What Changed, and What Was Lost

The Arab Spring was a turning point for free expression online. Across the region, people fought for digital rights, in some cases for the first time. From Pakistan and Syria to Iran and Egypt, residents defended and improved their human rights thanks to secure communications and censorship-free social media platforms.

But even as new technology assisted the revolution, tech companies continued (and continue) to assist governments in the silencing and surveillance of their critics. Earlier this year, a group of activists, journalists, and human rights organizations sent an open letter to Facebook, Twitter, and YouTube, demanding that the companies stop silencing critical voices from the Middle East and North Africa. Noting that key activists and journalists in the region continue to be censored on the platforms, sometimes at the behest of other governments, the letter urges the companies to "end their complicity in the censorship and erasure of the oppressed communities' narratives and histories," and makes several important demands.

In particular, the letter urges the companies to:

  • Engage with local users, activists, human rights experts, academics, and civil society;
  • Invest in the local and regional expertise to develop and implement context-based content moderation decisions;
  • Pay special attention to cases arising from war and conflict zones to ensure content moderation decisions do not unfairly target marginalized communities;
  • Preserve restricted content related to cases arising from war and conflict zones that is made unavailable;
  • Provide greater transparency and notice when it comes to deletions and account takedowns, and offer meaningful and timely appeals for users, in accordance with the Santa Clara Principles.

The Arab Spring may seem to present us with a conundrum. As more governments around the world have chosen authoritarianism in the days since, platforms have often contributed to repression. But the legacy of activists and citizens using social media to push for political change and social justice lives on. We now know that the Internet, and technology as a whole, can enable human rights. Today, more people than ever understand that digital rights are themselves human rights. But like all rights, they must be defended, fought for, and protected against those who would rather hold onto power than share it—whether they lead a government or a tech company.

Years ago, Bassel Safadi Khartabil, the Syrian open source developer, blogger, entrepreneur, hackerspace founder, and free culture advocate, wrote to EFF that “code is much more than tools. It's an education that opens youthful minds, and moves nations forward. Who can stop that? No-one.” In 2015, after several years of imprisonment without trial, Bassel was executed by the Syrian government. He is missed dearly, along with the many, many others who were killed in their fight for basic rights and freedoms. While the Arab Spring, as it is commonly referred to, may be over, for so many it has never ended. Some day, we hope, it will.

All photos credit Jillian C. York.

Twitter and Interoperability: Some Thoughts From the Peanut Gallery

EFF - Mon, 01/25/2021 - 3:04pm

Late in 2019, Twitter CEO Jack Dorsey floated "Project Blue Sky," a plan for an interoperable, federated, standardized Twitter that would let users (or toolsmiths who work on behalf of users) gain more control over their participation in the Twitter system. This was an exciting moment for us, a significant lurch towards an interoperable, decentralized social media world. We were especially excited to see Dorsey cite Mike Masnick's excellent Protocols, Not Platforms paper.

It’s been more than a year, and Twitter has marked its progress with an “ecosystem review” that sets out its view of the landscape of distributed systems. The world is a very different place today, with social media playing a very different role in 2021 than it played in 2019. With much of the world locked down, the legitimacy of the US political system under threat, and social media moderation policies taking on national significance, the question of how a federated, decentralized Internet would work has become even more salient...and urgent.

Wise Kings vs Autonomous Zones

It's a safe bet that no one is thrilled with the moderation policies of the Big Tech platforms (we sure aren't). But while we can all agree that tech has a moderation problem, there's a lot less consensus on what to do about it.

Broadly speaking, there are two approaches: the first is to fix the tech giants and the second is to fix the Internet.

The advocates for fixing Big Tech are implicitly saying that the net is a fait accompli, with the tech giants destined to reign over all forever. Fixing Big Tech means demanding that these self-appointed dictators be benevolent dictators, that the kings be wise kings. To that end, fix-big-tech people propose rules and structures for how platforms relate to their users: clear moderation policies, due process for the moderated, transparency and accountability.

By contrast, the fix-the-Internet people want a dynamic Internet, where there are lots of different ways of talking to your friends, organizing a political movement, attending virtual schools, exchanging money for goods and services, arguing about politics and sharing your art.

We're fix-the-Internet people. Sure, we want virtual spaces to be well-managed, accountable and transparent, but we also want there to be other places for users to go when they're not. Wise kings are great in theory, but in practice, they are just as fallible as any dunderhead. When the platform gets it wrong, you should be able to pick up and leave -- and still reach your friends, show and sell your art, and advocate for your causes.

That's where interoperability comes in. The problem isn't (merely) that the CEOs of giant tech companies are not suited to making choices that govern the digital lives of billions of people. It's that no one is suited to making those choices.

Switching Costs: Tear Down That Wall

Twitter has lots of competition today: there's Facebook, of course, but also the thousands of Mastodon and Diaspora instances that make up the "fediverse." If you don't like the moderation choices at Twitter, you can certainly take your social media business elsewhere.

But it's not quite that simple. If you like someone else's moderation policies better than Twitter's, you might still hang around on Twitter, because that's where all the people you want to talk to are hanging out. What's more, the people who you want to talk to are still on Twitter because you're there. It's a kind of mutual hostage-taking.

Economists call this a "network effect": the more people there are on Twitter, the more reason there is to be on Twitter and the harder it is to leave. But technologists have another name for this: "lock in." The more you pour into Twitter, the more it costs you to leave. Economists have a name for that cost: the "switching cost."
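
To make that concrete, here is a toy model, our own illustration with made-up numbers rather than anything from economics or from this post: the cost of leaving a platform is roughly the value of the relationships you would lose the ability to reach, and that is exactly the term interoperability removes.

    # Toy model, illustrative only: switching cost as the value of the
    # contacts you can no longer reach after leaving a platform.
    def switching_cost(contacts_on_platform: int,
                       value_per_contact: float,
                       interoperable: bool) -> float:
        # With interoperability, leaving costs you no relationships at all.
        contacts_lost = 0 if interoperable else contacts_on_platform
        return contacts_lost * value_per_contact

    # A walled garden: abandoning 500 contacts' worth of value is expensive...
    print(switching_cost(500, 1.0, interoperable=False))  # 500.0
    # ...but if messages can cross platform boundaries, the same move is free,
    # and the network effect no longer locks you in.
    print(switching_cost(500, 1.0, interoperable=True))   # 0.0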

Network effects create lock-in through high switching costs.

But here's the corollary: lower switching costs reduce lock-in and neutralize network effects!

Let's unpack that. Berlin was once divided by a wall. People on the eastern side of the wall weren't allowed to leave. If you tried to leave, you had to abandon everything and everyone you loved, and if you made it, you might never see or talk to them again. The Wall was a big deal, but it was only the visible part of a much larger system that bound the people of East Germany. The soft controls -- the social and material costs of abandoning your old life -- were just as much of a disincentive to leave.

By comparison, if you want to move from Berlin to, say, Paris today, you can take the train to Paris and check it out. If you like it, you can have all your worldly goods moved there (even your appliances!) -- and you can video-conference, text, and email with your Berlin friends. If you're lucky enough to have a spare room in your Paris flat, your friends can come for weekends (if not, you can put them up on the sofa). You can tune into your hometown radio and TV channels from your screens, and read the daily Berlin papers at the same moment as they're made available in Berlin itself. And if it doesn't work out in the end, well, you can move back to Berlin.

These low switching costs mean that many people have experimented with living somewhere else. Some people move away for university and never come back, or try a work-abroad scheme and fall in love with a new city. Lots of people change their minds and go back home. They have choices. They can try a new life out before they commit to it, and even after they commit to it, they don't have to write off the life they lived before. They can have one foot in the old place and one foot in the new place. They can swing through the jungle -- but not let go of the vine they're holding onto until they've got the next one firmly in hand.

But once you commit to a social media platform, you're behind a digital wall. The platforms only recently started to allow you to bring your stuff -- your messages and address book -- with you if you decided to leave (and they only committed to this tiny bit of freedom once the EU started to regulate them).

One thing they definitely won't let you do is hop over their wall and then lob messages back over it. Facebook has repeatedly sued and threatened others merely for daring to create an alternate interface that aggregated messages from Facebook with messages from other services -- the idea of allowing Facebook users to talk to their friends on their own terms was too much for the company. It's hard to believe they'd tolerate a world where people who left Facebook could still talk to their friends and read messages in their groups.

The parallels between walled gardens and the Berlin Wall don't stop there: the East German government maintained that the Wall wasn't there to keep people from escaping; rather, they said it was there to stop westerners who longed for the East German lifestyle from pouring across the border. Today, Facebook insists that it blocks interoperability to keep privacy-plunderers out of its service -- not to trap its users inside.

Fix the Internet, Not the Platforms

Facebook has a point. Not a good point, but a point. After all, the company is under pressure from all quarters to fight bad speech on its platform -- not just illegal material, but also disinformation, misinformation, harassment and other unpleasant or frightening communications. The more ways there are to connect to the platform, the more ways there are to make mischief.

But we need to ask ourselves why people stick around on the big social media platforms despite the harassment, hate speech, and other unpleasant experiences. It's because of the mutual hostage-taking of lock-in and the high switching costs of taking your social media activity elsewhere. You use your social networks because your friends are there. They're there in part because you're there. The cost of losing the connection to your friends (and, for artists, your audience; and, for businesses, your customers) exceeds the benefits of fleeing to a rival.

Interoperability reduces that cost and makes it easier to switch. Specifically, it lets you switch to smaller platforms whose administrators can enforce policies that suit you, personally: platforms whose definitions of "hate speech" or "harassment" or "rudeness" meet your own standards. And critically, it allows you to make the switch without losing contact with the colleagues and friends you collaborate with, nor with the inspiring or amusing (or enraging) strangers you follow, nor with the fans who follow you to be inspired or amused (or enraged).

In an interoperable world, we still need to advocate for the platforms we use to adopt good rules and then uphold them -- but we have alternatives when the platforms let us down. If you can fix the Internet (by restoring choice), fixing the platforms isn’t nearly so urgent.

Interoperability's Downsides

There's a downside to interoperability. When the massive, monopolized platforms that dominate today's Internet wield their power for good, good things happen. For example, if a platform adopts strict privacy rules (like transparency reports, limits on third-party access to user data, bans on selling or mining user data, high legal standards for official content removal demands, and so on), and then actually follows its own rules, hundreds of millions -- or even billions -- of users reap the benefits.

In a decentralized, interoperable Internet, it's much harder to enforce policies in ways that affect billions of people at once. That's the point, after all.

And if users can easily switch platforms without giving up access to their social circles, then platforms that countenance harmful or undesirable speech will accumulate users who enjoy that sort of thing.

But we need to distinguish between undesirable speech and illegal speech. The U.S. Constitution's First Amendment means that, in America, most speech is lawful (even odious speech, such as racist diatribes), but some communications (for example child sex abuse materials, or nonconsensual pornography) are not.

The present-day centralization of online communications means that even speech that is lawful can be removed from much of the Internet. If Facebook or Twitter ban a word or phrase or link, many people will never see it. If you agree with Facebook and Twitter's judgment, then this is great news. One thing that is increasingly apparent, though, is that most people -- irrespective of their political orientation -- are not fond of Big Tech's editorial judgment.

A decentralized, interoperable Internet is an explicit tradeoff: you lose the power that comes from convincing the platforms to eliminate the speech that disturbs or upsets you, but you gain the power to move to an online community where speech policies are to your liking...and you still get to talk to people who want different speech policies.

After all, most of us can't convince the platforms to bow to our demands for new speech policies.

But what about illegal speech? Fraud, nonconsensual pornography, serious incitements to violence? Well, these remain illegal, and courts and prosecutors (as well as private persons, in many cases) have the legal right to punish people who use platforms to spread this unlawful speech. What's more, depending on the kind of speech and the platform's complicity in it, the platform itself may share liability for criminal speech.

In an interoperable world, the decisions about what speech can't appear in a given community are made by the community members. The decisions about what speech can't appear anywhere are made by democratically elected lawmakers (not giant corporations that bow to some form of public pressure).

Likewise, in an interoperable world, we can’t rely on the platforms to protect our privacy. But they haven’t been very good at that so far. Instead, we will need to rely on privacy laws -- such as a federal privacy law, which is long overdue -- to protect us.

Everyone has some flavor of speech -- "icky speech" -- whose very existence is a source of distress, even if they never have to encounter it. Just knowing that it's out there is unbearable. When a broad public consensus emerges about that speech -- as with child sex abuse material -- it is prohibited by law. But if no consensus emerges, then each of us must bear the unbearable: we can avoid the speech, we can shun or criticize the people who engage in it, but we can't reasonably expect that the speech will cease, no matter how badly it makes us feel.

Twitter's Unique Position (Number Two)

Twitter has 340 million users. That's a big number.

Facebook has 2.6 billion users.

As Mark Zuckerberg has reminded us, Facebook's moderation staff exceeds Twitter's entire headcount. No wonder that Zuckerberg has called for a set of rules for what speech he should block on Facebook: even with his army of moderators, he'll fail to live up to those rules, but everyone else will fail much, much worse, clearing the field of Facebook competitors and miring us all permanently in Facebook's choices about what is and is not acceptable speech.

Twitter is big, but they're an order of magnitude smaller than the market-leader. They have a lot of resources, capital and users -- but they could easily be swallowed up or forced out by the giant that dominates the field.

That puts Twitter in a unique position: big enough to do something ambitious and technically expensive, precarious enough to take the risk.

Thus far, "Project Blue Sky" is just a lofty goal. It remains to be seen whether Twitter will live up to it. But while optimism is in (justifiably) short supply in this fraught moment, an interoperable social Internet has never been needed more. Social media companies make errors, just like all of us, but their vast number of locked-in users makes these companies’ errors more consequential than any blunder any of us might make. It’s fine to demand better judgment from the tech companies - but it’s far more important to reduce the consequences of their inevitable screw-ups, and to do that, we need to take away their power. Interoperability moves power from corporate board-rooms to tinkerers, co-ops, nonprofits, startups, and the users that they serve.

EFF’s Top Recommendations for the Biden Administration

EFF - Thu, 01/21/2021 - 7:16pm

At noon on January 20, 2021, Joseph R. Biden, Jr. was sworn in as the 46th President of the United States, and he and his staff took over the business of running the country.

The tradition of a peaceful transfer of power is as old as the United States itself. But by the time most of us see this transition on January 20th, it is mostly ceremonial. The real work of a transition begins months before, usually even before Election Day, when presidential candidates start thinking about key hires, policy goals, and legislative challenges. After the election, the Presidential Transition Act provides the president-elect’s team with government resources to lay the foundation for the new Administration’s early days in office. Long before the inauguration ceremony, the president-elect’s team also organizes meetings with community leaders, activists, and non-profits like EFF during this time, to hear about our priorities for the incoming Administration.

In anticipation of these meetings, EFF prepared a transition memo for the incoming Biden administration, outlining our recommendations for how it should act to protect everyone’s civil liberties in a digital world. While we hope to work with the new Administration on a wide range of policies that affect digital rights in the coming years, this memo focuses on the areas that need the Administration’s immediate attention. In many cases, we ask that the Biden Administration change course from the previous policies and practices.

We look forward to working with the new Biden Administration and the new Congress to implement these important ideas and safeguards.

Read EFF’s transition memo.

Political Satire Is Protected Speech – Even If You Don’t Get the Joke

EFF - Wed, 01/20/2021 - 6:34pm

This blog post was co-written by EFF Legal Fellow Houston Davidson.

Should an obviously fake Facebook post—one made as political satire—end with a lawsuit and a bill to pay for a police response to the post? Of course not, and that’s why EFF filed an amicus brief in Lafayette City v. John Merrifield.

In this case, Merrifield made an obviously fake Facebook event satirizing right-wing hysteria about Antifa. The announcement specifically poked fun at the well-established genre of fake Antifa social media activity, used by some to drum up anti-Antifa sentiment. However, the mayor of Lafayette didn’t get the joke, and now Lafayette City officials want Merrifield to pay for the costs of policing the fake event.

In EFF’s amicus brief filed in support of Merrifield in the Louisiana Court of Appeal, we trace the rise and proliferation of obviously parodic fake events. These events range from fake concerts (“Drake live at the Cheesecake Factory”) to quixotic events designed to forestall natural disasters (“Blow Your Saxophone at Hurricane Florence”) to fake destruction of local monuments (“Stone Mountain Implosion”). This kind of fake event is a form of online speech. It’s a crucial form of social commentary whether it makes people laugh, builds resilience in the face of absurdity, or criticizes the powerful.

EFF makes a two-pronged legal argument. First, the First Amendment clearly protects the kind of satirical speech that Merrifield made. Political satire and other parodic forms are an American tradition that goes back to the country’s founding. The amicus brief explains that “Facetious speech may be frivolously funny, sharply political, and everything in between, and it is all fully protected by the First Amendment, even when not everybody finds it humorous.” Even when the speech offends, it may still claim constitutional protection. While the State of Louisiana may not have recognized the parody, the First Amendment still applies. Second, parodic speech, by its very definition, lacks the intent to cause the specific serious harm that a charge of incitement or similar criminal liability requires. After all, Merrifield’s event was meant to make a point about online misinformation.

We hope the appeals court follows the law, and rejects the state’s case.

Blyncsy’s Patent On Contact Tracing Isn’t A Medical Breakthrough, It’s A Patent Breakdown

EFF - Wed, 01/20/2021 - 4:21pm
Stupid Patent of the Month

Contact tracing is critical for limiting the spread of a contagion like COVID-19, but that doesn’t mean it’s inventive to compare people’s locations using their smartphones. Rather, it’s all the more important to protect the basic methods of public health from bogus patent claims.

The CDC recommends contact tracing for close contacts of any “confirmed or probable” COVID-19 patients. But doing the job has been left to state and local public health officials. 

In addition to traditional contact tracing by public health workers, some are using “proximity apps” to track when people are near one another. These apps should be used only with careful respect for user privacy. And they can’t be a substitute for contact tracing done by public health workers. But, if used carefully, such apps could be a useful complement to traditional contact tracing.  

Unless someone with an absurdly broad patent stops them.

In April, while many states were still under the first set of shelter-in-place orders, a Utah company called Blyncsy was building a website to let government agencies know that it wanted to get paid—not for giving them any product, but simply for owning a patent they’d managed to get issued in 2019. In news reports and its own press release, Blyncsy said that anyone doing cellphone-based contact tracing would need to license its patent, U.S. Patent No. 10,198,779, “Tracking proximity relationships and uses thereof.” 

By September, Blyncsy had begun demanding $1 per resident payments from states that released contact tracing apps, including Pennsylvania, North Dakota, South Dakota, and Virginia.  

“State governments have taken it upon themselves to roll out a solution in their name in which they’re using our property without compensation,” Blyncsy CEO Mark Pittman told Wired. 

The developer behind North Dakota and Wyoming’s contact tracing apps told Wired that Blyncsy’s threatened patent battle could “put a freeze on new states rolling out apps.” 

Given Blyncsy’s $1 per person price point, a state like Virginia, with more than 8 million residents, would be vastly overpaying for contact tracing technology. That state’s app cost $229,000 to develop.  

Contact Tracing is Very Good And Very Old  

Unfortunately, our patent system encourages the acquisition and use of bad patents like the one owned by Blyncsy. The company’s patent essentially claims technology that’s well over a century old, and could be performed with pencil and paper. Simply adding a smartphone into the works—a “mobile computing device,” as Blyncsy’s patent describes it—doesn’t make the patent valid.  

And the patent’s primary claim doesn’t have any technology in it at all. It’s a pure “business method” patent, except that the only business it appears to support is demanding money from state public health departments. Simplifying the language of Claim 1, it describes the following steps (sketched in code after the list):

  • Receiving data about the location of a first person, who “has a contagion” 
  • Receiving data about the location of a second person
  • Determining if they’re close together 
  • Determining when they were close together
  • Determining whether the “proximity relationship” is positive or negative 
  • Labeling the second person as being either contaminated or not 
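
To underline how little Claim 1 actually covers, here is a minimal sketch, our own illustration rather than code from the patent or from any state's app, that performs every step in the list above with a few lines of arithmetic:

    # Our illustration only: every step of Claim 1 reduces to comparing
    # timestamped locations for two people.
    from math import hypot

    def was_exposed(infected_track, other_track,
                    max_distance=2.0, max_seconds=900):
        """Each track is a list of (timestamp, x, y) points for one person."""
        for t1, x1, y1 in infected_track:      # person who "has a contagion"
            for t2, x2, y2 in other_track:     # second person
                near = hypot(x1 - x2, y1 - y2) <= max_distance   # close together?
                concurrent = abs(t1 - t2) <= max_seconds         # at the same time?
                if near and concurrent:
                    return True    # "proximity relationship" positive: label exposed
        return False               # otherwise label the second person not exposed

    # Two people at nearly the same spot within five minutes of each other:
    print(was_exposed([(1000, 0.0, 0.0)], [(1200, 1.0, 0.5)]))   # True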

That’s just contact tracing, a process that has been done with simple writing and human interviews since at least the mid-19th century. In a famous breakthrough for the field of epidemiology, British doctor John Snow traced an 1854 cholera epidemic near London to a specific water pump on Broad Street. 

Patent documents have even more specific descriptions of Blyncsy’s claimed method. Last week, Unified Patents published some prior art for the Blyncsy patent, including a 2003 patent application directed to an “electronic proximity apparatus” that alerts individuals when they are exposed to a contagious disease.

Doing research, writing down the results, and trying to figure out how people got sick isn’t a patentable invention—it’s just basic science. The patent’s other claims are no more impressive. The issuance of the Blyncsy patent isn’t a success story, it’s evidence of glaring flaws in our patent system.  

Too often, patents are issued that don’t embody any innovation. These simply become a 20-year-long invitation for a company to demand payment for the innovation and work of others. Sometimes that’s other companies, and sometimes it’s our own governments. In this case, Blyncsy is trying to make a buck by levying what amounts to a tax—but one that only benefits a single, for-profit company. 

Unfortunately, we’ve seen patent holders repeatedly try to take advantage of the pandemic. Last year, patent troll Labrador Diagnostics tried to use old patents belonging to the notorious firm Theranos to sue a real firm making COVID-19 tests. Another patent troll sued a ventilator manufacturer. A health crisis is time to carefully limit, not expand, the power of patents. 

Oakland’s Progressive Fight to Protect Residents from Government Surveillance

EFF - Wed, 01/20/2021 - 2:26pm

The City of Oakland, California, has once again raised the bar on community control of police surveillance. Last week, Oakland's City Council voted unanimously to strengthen the city's already groundbreaking Surveillance and Community Safety Ordinance. The latest amendment, which went into effect immediately, adds to the city’s existing ban on government use of face recognition by prohibiting the Oakland Police Department from using predictive policing technology—which has been shown to amplify existing bias in policing—as well as a range of privacy-invasive biometric surveillance technologies.

Oakland is now (as far as we know) the first city to ban government use of voice recognition technology. This is an important step, given the growing proliferation of this invasive form of surveillance, at home and abroad. Oakland also may be the first city to ban government use of any technology that can identify a person based on “physiological, biological, or behavioral characteristics ascertained from a distance.” This would include recognition of a person’s distinctive gait or manner of walking.

Last June, the California city of Santa Cruz became the first U.S. city to ban its police department from using predictive policing technology. Just a few weeks ago, New Orleans, Louisiana, also prohibited the technology, as well as certain forms of biometric surveillance. With last week's amendments, Oakland became the largest city to ban predictive policing.

Oakland is also the first city to incorporate these prohibitions into a more comprehensive Community Control of Police Surveillance (CCOPS) framework. Its ordinance not only prohibits these particularly pernicious forms of surveillance, but also ensures that police cannot obtain any other forms of surveillance technology absent permission from the city council following input from residents. This guarantees transparency, accountability, and community engagement before any privacy-invasive surveillance technology can be adopted by local government agencies.

Last week's vote marks the second significant update to Oakland's CCOPS ordinance since it went into effect in 2018. One year after its passage, the law was amended to ban government use of face recognition. At the time, Oakland was one of only three cities to ban the technology. Neighboring San Francisco was the first, having included its ban in its own CCOPS ordinance. Since then, over a dozen U.S. cities have followed suit. Now federal legislation seeks to bar federal agencies from using the technology. The bill is sponsored by Senators Markey and Merkley and Representatives Ayanna Pressley, Pramila Jayapal, Rashida Tlaib, and Yvette Clarke.

The recent amendments to Oakland’s surveillance ordinance were sponsored by the city’s Privacy Advisory Commission, which is tasked with developing and advising on citywide privacy concerns. In a blog post published by Secure Justice—a Bay Area non-profit led by the Commission's chair, Brian Hofer—Sameena Usman, of the San Francisco office of the Council on American-Islamic Relations (CAIR), explained: "Not only are these methods intrusive and don't work, they also have a disproportionate impact on Black and brown communities – leading to over-policing." EFF joined 16 other groups in supporting the new law.

EFF applauds the Commission and Oakland's City Council for taking this groundbreaking step advancing public safety and protecting civil liberties. Predictive policing and biometric surveillance technologies have a harmfully disparate impact against people of color, immigrants, and other vulnerable populations, as well as a chilling effect on our fundamental First Amendment freedoms.  

Why EFF Doesn’t Support Bans On Private Use of Face Recognition

EFF - Wed, 01/20/2021 - 11:29am

Government and private use of face recognition technology each present a wealth of concerns. Privacy, safety, and amplification of carceral bias are just some of the reasons why we must ban government use.

But what about private use? It, too, can exacerbate injustice, whether by police contractors, retail establishments, business improvement districts, or homeowners.

Still, EFF does not support banning private use of the technology. Some users may choose to deploy it to lock their mobile devices or to demonstrate the hazards of the tech in government hands. So instead of a prohibition on private use, we support strict laws to ensure that each of us is empowered to choose if and by whom our faceprints may be collected. This requires mandating informed opt-in consent and data minimization, enforceable through a robust private right of action.

Illinois has had such a law for more than a decade. This approach properly balances the human right to control technology—both to use and develop it, and to be free from other people’s use of it.

The menace of all face recognition technology

Face recognition technology requires us to confront complicated questions in the context of centuries-long racism and oppression within and beyond the criminal system. 

Eighteenth-century “lantern laws” requiring Black and indigenous people to carry lanterns to illuminate themselves at night are one of the earliest examples of a useful technology twisted into a force multiplier of oppression. Today, face recognition technology has the power of covert bulk collection of biometric data that was inconceivable less than a generation ago.

Unlike our driver’s licenses, credit card numbers, or even our names, we cannot easily change our faces. So once our biometric data is captured and stored as a face template, it is largely indelible. Furthermore, as a CBP vendor found out when the face images of approximately 184,000 travelers were stolen, databases of biometric information are ripe targets for data thieves.

Face recognition technology chills fundamental freedoms. As early as 2015, the Baltimore police used it to target protesters against police violence. Its threat to essential liberties extends far beyond political rallies. Images captured outside houses of worship, medical facilities, community centers, or our homes can be used to infer our familial, political, religious, and sexual relationships. 

Police have unparalleled discretion to use violence and intrude on liberties. From our nation’s inception, essential freedoms have not been equally available to all, and the discretion vested in law enforcement only accentuates these disparities. One need look no further than the Edmund Pettus Bridge or the Church Committee. In 2020, as some people took to the streets to protest police violence against Black people, and others came out to protest COVID-19 mask mandates, police exercised their authority in starkly different manners.

Face recognition amplifies these police powers and aggravates these racial disparities.

Each step of the way, the private sector has contributed. Microsoft, Amazon, and IBM are among the many vendors that have built face recognition for police (though in response to popular pressure they have temporarily ceased doing so). Clearview AI continues to process faceprints of billions of people without their consent in order to help police identify suspects. Such companies ignore the human right to make informed choices over the collection and use of biometric data, and join law enforcement in exacerbating long-established inequities.

The fight to end government use of face recognition technology 

In the hands of government, face recognition technology is simply too threatening. The legislative solution is clear: ban government use of this technology.

EFF has fought to do so, alongside our members, Electronic Frontier Alliance allies, other national groups like the ACLU, and a broad range of concerned community-based groups. We supported the effort to pass San Francisco’s historic 2019 ordinance. We’ve also advocated before city councils in California (Berkeley and Oakland), Massachusetts (Boston, Brookline, and Somerville), and Oregon (Portland).

Likewise, we helped pass California’s AB 1215. It establishes a three-year moratorium on the use of face recognition technology with body-worn police cameras. AB 1215 is the nation’s first state-level face recognition prohibition. We also joined with the ACLU and others in demanding that manufacturers stop selling the technology to law enforcement.

In 2019, we launched our About Face campaign. The campaign supplies model language for an effective ban on government use of face recognition, tools to prepare for meeting with local officials, and a library of existing ordinances. Members of EFF’s organizing team also work with community-based members of the Electronic Frontier Alliance to build local coalitions, deliver public comment at City Council hearings, and build awareness in their respective communities. 

The promise of emerging technologies

EFF works to ensure that technology supports freedom, justice, and innovation for all the people of the world. In our experience, when government tries to take technology out of the hands of the public, it is often serving the interests of the strong against the weak. Thus, while it is clear that we must ban government use of face recognition technology, we support a different approach to private use.

As Malkia Cyril wrote in a 2019 piece for McSweeney’s The End of Trust titled “Watching the Black Body”: “Our twenty-first-century digital environment offers Black communities a constant pendulum swing between promise and peril.” High-tech profiling, policing, and punishment exacerbate the violence of America’s race-based caste system. Still, other technologies have facilitated the development of intersectional and Black-led movements for change, including encrypted communication, greater access to video recording, and the ability to circumvent the gatekeepers of traditional media.

Encryption and Tor secure our private communications from snooping by stalkers and foreign nations. Our camera phones help us to document police misconduct against our neighbors. The Internet allows everyone to (in the words of the U.S. Supreme Court) “become a town crier with a voice that resonates farther than it could from any soapbox.” Digital currency holds the promise of greater financial privacy and independence from financial censorship.

EFF has long resisted government efforts to ban or hobble the development of such novel technologies. We stand with police observers against wrongful arrest for recording on-duty officers, with sex workers against laws that stop them from using the Internet to share safety information with peers, and with cryptographers against FBI attempts to weaken encryption. Emerging technologies are for everyone.

Private sector uses of face recognition

There are many menacing ways for private entities to use face recognition. Clearview AI, for example, uses it to help police departments identify and arrest protesters.

It does not follow that all private use of face recognition technology undermines human rights. For example, many people use face recognition to lock their smartphones. While passwords provide stronger protection, without this option some users might not secure their devices at all.

Researchers and anti-surveillance scholars have made good use of the technology. 2018 saw groundbreaking work from then-MIT researcher Joy Buolamwini and Dr. Timnit Gebru, exposing face recognition’s algorithmic bias. That same year, the ACLU utilized Amazon’s Rekognition system to demonstrate the disparate inefficacy of a technology that disproportionately misidentified congressional representatives of color as individuals in publicly accessible arrest photos. U.S. Sen. Edward Markey (D-MA), one of the lawmakers erroneously matched despite being part of the demographic the technology identifies most accurately, went on to introduce the Facial Recognition and Biometric Technology Moratorium Act of 2020.

These salutary efforts would be illegal in a state or city that banned private use of face recognition. Even a site-specific ban—for example, just in public forums like parks, or public accommodations like cafes—would limit anti-surveillance activism in those sites. For example, it would bar a live demonstration at a rally or lecture of how face recognition works on a volunteer.

Strict limits on private use of face recognition

While EFF does not support a ban on private use of face recognition, we do support strict limits. Specifically, laws should subject private parties to the following rules (illustrated with a short code sketch after the list):

  1. Do not collect a faceprint from a person without their prior written opt-in consent.
  2. Likewise, do not disclose a person’s faceprint without their consent.
  3. Do not retain a faceprint after the purpose of collection is satisfied, or after a fixed period of time, whichever is sooner.
  4. Do not sell faceprints, period.
  5. Securely store faceprints to prevent their theft or misuse.
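
As a thought experiment only, not statutory language from BIPA or any vendor's API, here is a minimal sketch of how software that handles faceprints might enforce rules like these directly in code:

    # Illustrative sketch only: a faceprint store that refuses to operate
    # without opt-in consent, never allows sale, and expires records after
    # a fixed retention window (the 90-day figure is a made-up example).
    import time

    RETENTION_SECONDS = 90 * 24 * 3600   # hypothetical fixed retention period

    class FaceprintStore:
        def __init__(self):
            self._records = {}            # person_id -> (faceprint, stored_at)

        def enroll(self, person_id, faceprint, opt_in_consent: bool):
            if not opt_in_consent:
                raise PermissionError("no prior written opt-in consent")
            self._records[person_id] = (faceprint, time.time())

        def disclose(self, person_id, consent_to_disclose: bool):
            if not consent_to_disclose:
                raise PermissionError("disclosure requires consent")
            return self._records[person_id][0]

        def purge_expired(self):
            now = time.time()
            self._records = {pid: (fp, t) for pid, (fp, t) in self._records.items()
                             if now - t < RETENTION_SECONDS}

        def sell(self, *args, **kwargs):
            raise PermissionError("faceprints may not be sold, period")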

All privacy laws need effective enforcement. Most importantly, a person who discovers that their privacy rights are being violated must have a “private right of action” so they can sue the rule-breaker.

In 2008, Illinois enacted the first law of this kind: the Illinois Biometric Information Privacy Act (BIPA). A new federal bill sponsored by U.S. Senators Jeff Merkley and Bernie Sanders seeks to implement BIPA on a nationwide basis.

We’d like this kind of law to contain two additional limits. First, private entities should be prohibited from collecting, using, or disclosing a person’s faceprint except as necessary to give them something they asked for. This privacy principle is often calledminimization.” Second, if a person refuses to consent to faceprinting, a private entity should be prohibited from retaliating against them by, for example, charging a higher price or making them stand in a longer line. EFF opposes “pay for privacy” schemes that pressure everyone to surrender their privacy rights, and contribute to an income-based society of privacy “haves” and “have nots.” Such laws should also have an exemption for newsworthy information.

EFF has long worked to enact more BIPA-type laws, including in Congress and Montana. We regularly advocate in Illinois to protect BIPA from legislative backsliding. We have filed amicus briefs in a federal appellate court and the Illinois Supreme Court to ensure strong enforcement of BIPA, and in an Illinois trial court against an ill-conceived challenge to the law’s constitutionality.

Illinois’ BIPA enjoys robust enforcement. For example, Facebook agreed to pay its users $650 million to settle a case challenging non-consensual faceprinting under the company’s “tag suggestions” feature. Also, the ACLU recently sued Clearview AI for its non-consensual faceprinting of billions of people. Illinoisans have filed dozens of other BIPA suits.

Government entanglement in private use of face recognition

EFF supports one rule for government use of face surveillance (ban it) and another for private use (strictly limit it). What happens when government use and private use overlap?

Some government agencies purchase faceprints or faceprinting services from private entities. Government should be banned from doing so. Indeed, most bans on government use of face recognition properly include a ban on government obtaining or using information derived from a face surveillance system.

Some government agencies own public forums, like parks and sidewalks, that private entities use for their own expressive activities, like protests and festivals. If community members wish to incorporate consensual demonstrations of facial recognition into their expressive activity, subject to the strict limits above, they should not be deprived of the right to do so in the public square. Thus, when a draft of Boston’s recently enacted ban on government use of face recognition also prohibited issuance of permits to third parties to use the technology, EFF requested and received language limiting this to when third parties are acting on behalf of the city.

Some government agencies own large restricted spaces, like airports, and grant other entities long-term permission to operate there. EFF opposes government use of face recognition in airports. This includes by the airport authority itself; by other government agencies operating inside the airport, including federal, state, and local law enforcement; and by private security contractors. This also includes a ban on government access to information derived from private use of face recognition. On the other hand, EFF does not support a ban on private entities making their own use of face recognition inside airports, subject to the strict limits above. This includes use by travelers and airport retailers. Some anti-surveillance advocates have sought more expansive bans on private use.

Private use of face recognition in places of public accommodation

A growing number of retail stores and other places of public accommodation are deploying face recognition to identify all people entering the establishment. EFF strongly opposes this practice. When used for theft prevention, it has a racially disparate impact: the photo libraries of supposedly “suspicious” people are often based upon unfair encounters between people of color and law enforcement or corporate security. When used to micro-target sales to customers, this practice intrudes on privacy: the photo libraries of consumers, correlated to their supposed preferences, are based on corporate surveillance of online and offline activity.

The best way to solve this problem is with the strict limits above. For example, the requirement of written opt-in consent would bar a store from conducting face recognition on everyone who walks in.

On the other hand, EFF does not support bans on private use of face recognition within public accommodations. The anti-surveillance renter of an assembly hall, or the anti-surveillance owner of a café, might wish to conduct a public demonstration of how face recognition works, to persuade an audience to support legislation that bans government use of face recognition. Thus, while EFF supported the recently enacted ban on government use of face recognition in Portland, Oregon, we did not support that city’s simultaneously enacted ban on private use of face recognition within public accommodations.

U.S. businesses selling face recognition to authoritarian regimes 

Corporations based in the United States have a long history of selling key surveillance technology, used to aid in oppression, to authoritarian governments. This has harmed journalists, human rights defenders, ordinary citizens, and democracy advocates. The Chinese tech corporation Huawei reportedly is building face recognition software to notify government authorities when a camera identifies a Uyghur person. There is risk that, without effective accountability measures, U.S. corporations will sell face recognition technology to authoritarian governments.

In collaboration with leading organizations tracking the sale of surveillance technology, we recently filed a brief asking the U.S. Supreme Court to uphold the ability of non-U.S. residents to sue U.S. corporations that aid and abet human rights abuses abroad. The case concerns the Alien Tort Statute, which allows foreign nationals to hold U.S.-based corporations accountable for their complicity in human rights violations.

To prevent such harms before they occur, businesses selling surveillance technologies to governments should implement a “Know Your Customer” framework of affirmative investigation before any sale is executed or service provided. Significant portions of this framework are already required through the Foreign Corrupt Practices Act and export regulations. Given the inherent risks associated with face recognition, U.S. companies should be especially wary of selling this technology to foreign states.

Fight back

EFF will continue to advocate for bans on government use of face recognition, and for legislation that protects each of us from non-consensual collection of our biometric information by private entities. You can join us in this fight by working with others in your area to advocate for stronger protections, and by signing our About Face petition to end government use of face recognition technology.

New OCC Rule Is a Win in the Fight Against Financial Censorship

EFF - Tue, 01/19/2021 - 9:00pm

On Thursday, the Office of the Comptroller of the Currency finalized its Fair Access to Financial Services rule, which will prevent banks from refusing to serve entire classes of customers that they find politically or morally unsavory. The rule is a huge win for civil liberties, and for the many sectors that have found themselves in the bad graces of corporate financial services, like cryptocurrency projects, marijuana businesses, sex worker advocacy groups, and others.

For years, financial intermediaries have engaged in financial censorship, shutting down accounts in order to censor legal speech. For example, banks have refused to serve entire industries on the basis of political disagreement, and other financial intermediaries have cut off access to financial services for independent booksellers, social networks, and whistleblower websites, even when these websites are engaged in First Amendment-protected speech. 

Banks have refused to serve entire industries on the basis of political disagreement

For the organizations losing access to financial services, this censorship can disrupt operations and, in some cases, have existential consequences. For that reason, financial censorship can affect free expression. As just one example, in Backpage.com, LLC v. Dart, a county sheriff embarked on a campaign to crush a website by demanding that payment processors prohibit the use of their credit cards to purchase ads on the site. The Seventh Circuit court of appeals held that the sheriff’s conduct violated the First Amendment and noted that the sheriff had attacked the website “not by litigation but instead by suffocation, depriving the company of ad revenues by scaring off its payments-service providers.” As EFF explained in our amicus brief in that case, “[like] access to Internet connectivity, access to the financial system is a necessary precondition for the operations of nearly every other Internet intermediary, including content hosts and platforms. The structure of the electronic payment economy . . . make these payment systems a natural choke point for controlling online content.” In that case, the Seventh Circuit analogized shutting down financial services to “killing a person by cutting off his oxygen supply rather than by shooting him.” 

Innovators in the cryptocurrency space are all too familiar with these problems. For years, they have struggled to access basic financial services—regardless of how successful, promising, or financially sound the particular company is. The Blockchain Association highlighted this problem in its comments supporting the OCC’s new rule, noting that in some cases individuals working at legal U.S. cryptocurrency businesses have been denied personal banking services because of their employment. The difficulty in accessing these financial services has been a huge barrier for cryptocurrency entrepreneurs, whose time and energy would be better spent innovating rather than trying to figure out how to get a bank account. 

The OCC’s new rule will help address this. It will limit the ability of certain financial institutions like large, national banks to categorically refuse service to a class of customers. Under the new rule, banks will still be empowered to refuse to serve certain customers, but must use individual, quantifiable, customer-by-customer risk assessment, rather than refusing to serve an entire industry. We hope that, under this regulation, historically underserved businesses will be able to access the financial services they need, and won’t lose their financial lifelines solely based on the whims and subjective moral standards of bank executives. 

So-called “Consent Searches” Harm Our Digital Rights

EFF - Thu, 01/14/2021 - 7:56pm

Imagine this scenario: You’re driving home. Police pull you over, allegedly for a traffic violation. After you provide your license and registration, the officer catches you off guard by asking: “Since you’ve got nothing to hide, you don’t mind unlocking your phone for me, do you?” Of course, you don’t want the officer to copy or rummage through all the private information on your phone. But they’ve got a badge and a gun, and you just want to go home. If you’re like most people, you grudgingly comply.

Police use this ploy, thousands of times every year, to evade the Fourth Amendment’s requirement that police obtain a warrant, based on a judge’s independent finding of probable cause of crime, before searching someone’s phone. These misleadingly named “consent searches” invade our digital privacy, disparately burden people of color, undermine judicial supervision of police searches, and rest on a legal fiction.

Legislatures and courts must act. In highly coercive settings, like traffic stops, police must be banned from conducting “consent searches” of our phones and similar devices.

In less-coercive settings, such “consent searches” must be strictly limited. Police must have reasonable suspicion that crime is afoot. They must collect and publish statistics about consent searches, to deter and detect racial profiling. The scope of consent must be narrowly construed. And police must tell people they can refuse.

Other kinds of invasive digital searches currently rest on “consent,” too. Schools use it to search the phones of minor students. Police also use it to access data from home internet of things (IoT) devices, like Amazon Ring doorbell cameras, that are streamlined for bulk police requests. Such “consent” requests must also be limited.

“Consent” Is a Legal Fiction

The “consent search” end-run around the warrant requirement rests on a legal fiction: that people who say “yes” to an officer’s demand for “consent” have actually consented. The doctrinal original sin is Schneckloth v. Bustamonte (1973), which held that “consent” alone is a legal basis for search, even if the person searched was not aware of their right to refuse. As Justice Thurgood Marshall explained in his dissent:

All the police must do is conduct what will inevitably be a charade of asking for consent. If they display any firmness at all, a verbal expression of assent will undoubtedly be forthcoming.

History has proven Justice Marshall right. Field data show that the overwhelming majority of people grant “consent.” For example, statistics on all traffic stops in Illinois, for 2015, 2016, 2017, and 2018, show that about 85% of white drivers and about 88% of minority drivers grant consent.

Lab data show the same. For example, a 2019 study in the Yale Law Journal, titled “The Voluntariness of Voluntary Consent,” asked each participant to unlock their phone for a search. Compliance rates were 97% and 90%, in two cohorts of about 100 people each.

The study separately asked other people whether a hypothetical reasonable person would agree to unlock their phone for a search. These participants were not themselves asked for consent. Some 86% and 88% of these two cohorts (again about 100 participants each) predicted that a reasonable person would refuse to grant consent. The authors observed that this “empathy gap” appears in many social psychology experiments on obedience. They warned that judges in the safety of their chambers may assume that motorists stopped by police feel free to refuse search requests, when motorists in fact don’t.

Why might people comply with search requests from police when they don’t want to? Many are not aware they can refuse. Many others reasonably fear the consequences of refusal, including longer detention, speeding tickets, or even further escalation, including physical violence. Further, many savvy officers use their word choice and tone to dance on the line between commands—which require objective suspicion—and requests—which don’t.

“Consent Searches” Are Widespread

In October 2020, Upturn published a watershed study about police searches of our phones, called “Mass Extraction.” It found that more than 2,000 law enforcement agencies, located in all 50 states, have purchased surveillance technology that can conduct “forensic” searches of our mobile devices. Further, police have used this tech hundreds of thousands of times to extract data from our phones.

The Upturn study also found that police based many of these searches on “consent.” For example, consent searches account for 38% of all cell phone searches in Anoka County, Minnesota; about one-third in Seattle, Washington; and 18% in Broward County, Florida.

Far more common are “manual” searches, where officers themselves scrutinize the data in our phones, without assistance from external software. For example, there was a ten-to-one ratio of manual searches to forensic searches by U.S. Customs and Border Protection in fiscal year 2017. Manual searches are just as threatening to our privacy. Police access virtually the same data (except some forensic searches recover “deleted” data or bypass encryption). Also, it is increasingly easy for police to use a phone’s built-in search tools to locate pertinent data. As with forensic searches, it is likely that a large portion of manual searches are by “consent.”

“Consent Searches” Invade Privacy

Phone searches are extraordinarily invasive of privacy, as the U.S. Supreme Court explained in Riley v. California (2014). In that case, the Court held that an arrest alone does not absolve police of their ordinary duty to obtain a warrant before searching a phone. Quantitatively, our phones have “immense storage capacity,” including “millions of pages of text.” Qualitatively, they “collect in one place many distinct types of information – an address, a note, a prescription, a bank statement, a video – that reveal much more in combination than any isolated record.” Thus, phone searches “bear little resemblance” to searches of containers like bags that are “limited by physical realities.” Rather, phone searches reveal the “sum of an individual’s private life.”

“Consent Searches” Cause Racial Profiling

There is a greater risk of racial and other bias, intentional or implicit, when decision-makers have a high degree of subjective discretion, compared to when they are bounded by objective criteria. This occurs in all manner of contexts, including employment and law enforcement.

Whether to ask a person for “consent” to search is a high-discretion decision. The officer needs no suspicion at all and will almost always receive compliance.

Predictably, field data show racial profiling in “consent searches.” For example, the Illinois State Police (ISP) in 2019 were more than twice as likely to seek consent to search the cars of Latinx drivers compared to white drivers, yet more than 50% more likely to find contraband when searching the cars of white drivers compared to Latinx drivers. ISP data show similar racial disparities in other years.

When it comes to phone searches, it is highly likely that police likewise seek “consent” more often from people of color. Especially amid our growing national understanding that Black Lives Matter, we must end police practices like these, which unfairly burden people of color.

“Consent Searches” Undermine Judicial Review of Policing

Judges often examine warrantless searches by police. This occurs in criminal cases, if the accused moves to suppress evidence, and in civil cases, if a searched person sues the officer. Either way, the judge may analyze whether the officer had the requisite level of suspicion. This incentivizes officers to turn square corners while out in the field. And it helps ensure enforcement of the Fourth Amendment’s ban on “unreasonable” searches.

Police routinely evade this judicial oversight through the simple expedient of obtaining “consent” to search. The judge loses their power to investigate whether the officer had the requisite level of suspicion. Instead, the judge may only inquire whether “consent” was genuine.

Given all the problems discussed above, what should be done about police “consent searches” of our phones?

Ban “Consent Searches” in High-Coercion Settings

Legislatures and judges should bar police from searching a person’s phone or similar electronic devices based on consent when the person is in a high-coercion setting. This rule should apply during traffic stops, sidewalk detentions, home searches, station house arrests, and any other encounters with police where a reasonable person would not feel free to leave. This rule should apply to both manual and forensic searches.

Upturn’s 2020 study called for a ban on “consent searches” of mobile devices. It reasons that “the power and information asymmetries of cellphone consent searches are egregious and unfixable,” so “consent” is “essentially a legal fiction.” Further, consent searches are “the handmaiden of racial profiling.”

Civil rights advocates, such as the ACLU, have long demanded bans on “consent searches” of vehicles or persons during traffic stops. In 2003, the ACLU of Northern California settled a lawsuit with the California Highway Patrol, placing a three-year moratorium on consent searches. In 2010, the ACLU of Illinois petitioned the U.S. Department of Justice to ban the use of consent searches by the Illinois State Police. In 2018, the ACLU of Maryland supported a bill to ban them. Of course, searches of phones are even more privacy-invasive than searches of cars.

Strictly Limit “Consent Searches” in Less-Coercive Settings

Outside of high-coercion settings, some people may have a genuine interest in letting police inspect some of the data in their phones. An accused person might wish to present geolocation data showing they were far from the crime scene. A date rape survivor might wish to present a text message from their assailant showing the assailant’s state-of-mind. In less-coercive settings, consent to present such data is more likely to be truly voluntary.

Thus, it is not necessary to ban “consent searches” in less-coercive settings. But even in such settings, legislatures and courts must impose strict limits.

First, police must have reasonable suspicion that crime is afoot before conducting a consent search of a phone. Nearly two decades ago, the Supreme Courts of New Jersey and Minnesota imposed this limit on consent searches during traffic stops. So does a Rhode Island statute. This rule limits the subjective discretion of officers, and thus the risk of racial profiling. Further, it ensures courts may evaluate whether the officer had a criminal predicate before invading a person’s privacy.

Second, police must collect and publish statistics about consent searches of electronic devices, to deter and detect racial profiling. This is a common practice for police searches of pedestrians, motorists, and their effects. Then-State Senator Barack Obama helped pass the Illinois statute that requires this during traffic stops. California and many other states have similar laws.

Third, police and reviewing courts must narrowly construe the scope of a person’s consent to search their device. For example, if a person consents to a search of their recent text messages, a police officer must be barred from searching their older text messages, as well as their photos or social media. Otherwise, every consent search of a phone will turn into a free-ranging inquisition into all aspects of a person’s life. For the same reasons, EFF advocates for a narrow scope of device searches even pursuant to a warrant.

Fourth, before an officer searches a person’s phone by consent, the officer must notify the person of their legal right to refuse. The Rhode Island statute requires this warning, though only for youths. This is analogous to the famous Miranda warning about the right to remain silent.

Other Kinds of “Consent Searches”

Of course, consent searches by police of our phones are not the only kind of consent searches that threaten our digital rights.

For example, many public K-12 schools search students’ phones by “consent.” Indeed, some schools use forensic technology to do so. Given the inherent power imbalance between minor students and their adult teachers and principals, limits on consent searches by schools must be at least as privacy-protective as those presented above.

Also, some companies have built home internet of things (IoT) devices that facilitate bulk consent requests from police to residents. For example, Amazon Ring has built an integrated system of home doorbell cameras that enables local police, with the click of a mouse, to send residents a message requesting footage of passersby, neighbors, and themselves. Once police have the footage, they can use and share it with few limits. Ring-based consent requests may be less coercive than those during a home search. Still, the strict limits above must apply: reasonable suspicion, publication of aggregate statistics, narrow construction of the scope of consent, and notice of the right to withhold consent.

It’s Business As Usual At WhatsApp

EFF - Thu, 01/14/2021 - 6:44pm

WhatsApp users have recently started seeing a new pop-up screen requiring them to agree to its new terms and privacy policy by February 8th in order to keep using the app. 

The good news is that, overall, this update does not make any extreme changes to how WhatsApp shares data with its parent company Facebook. The bad news is that those extreme changes actually happened over four years ago, when WhatsApp updated its privacy policy in 2016 to allow for significantly more data sharing and ad targeting with Facebook. What's clear from the reaction to this most recent change is that WhatsApp shares much more information with Facebook than many users were aware, and has been doing so since 2016. And that’s not users’ fault: WhatsApp’s obfuscation and misdirection around what its various policies allow has put its users in a losing battle to understand what, exactly, is happening to their data.

The new terms of service and privacy policy are one more step in Facebook's long-standing effort to monetize its messaging properties, and they are also in line with its plans to make WhatsApp, Facebook Messenger, and Instagram Direct less separate. This raises serious privacy and competition concerns, including but not limited to WhatsApp's ability to share new information with Facebook about users' interactions with new shopping and payment products.

To be clear: WhatsApp still uses strong end-to-end encryption, and there is no reason to doubt the security of the contents of your messages on WhatsApp. The issue here is other data about you, your messages, and your use of the app. We still offer guides for WhatsApp (for iOS and Android) in our Surveillance Self-Defense resources, as well as for Signal (for iOS and Android).

Then and Now

This story really starts in 2016, when WhatsApp changed its privacy policy for the first time since its 2014 acquisition to allow Facebook access to several kinds of WhatsApp user data, including phone numbers and usage metadata (e.g. information about how long and how often you use the app, as well as your operating system, IP address, mobile network, etc.). Then, as now, public statements about the policy highlighted how this sharing would help WhatsApp users communicate with businesses and receive more "relevant" ads on Facebook.
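
To illustrate what this kind of sharing looks like in practice, here is a hypothetical example of the sort of account and usage metadata described above. The field names and values are invented for illustration; they are not WhatsApp’s actual schema.

```python
# Hypothetical illustration only -- not WhatsApp's actual data format. It shows
# the kind of account and usage metadata the 2016 policy allowed to flow to
# Facebook, none of which requires reading message contents.
shared_metadata = {
    "phone_number": "+1XXXXXXXXXX",       # account identifier
    "last_seen": "2021-01-13T18:42:00Z",  # when the app was last used
    "daily_sessions": 27,                 # how often the app is opened
    "session_minutes": 96,                # how long it is used each day
    "os": "Android 11",
    "ip_address": "203.0.113.7",
    "mobile_network": "ExampleCarrier",
}
# Message text and media stay end-to-end encrypted; everything above does not.
```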

At the time, WhatsApp gave users a limited option to opt out of the change. Specifically, users had 30 days after first seeing the 2016 privacy policy notice to opt out of “shar[ing] my WhatsApp account information with Facebook to improve my Facebook ads and product experiences.” The emphasis is ours; it meant that WhatsApp users were able to opt out of seeing visible changes to Facebook ads or Facebook friend recommendations, but could not opt out of the data collection and sharing itself.

If you were a WhatsApp user in August 2016 and opted out within the 30-day grace period, that choice will still be in effect. You can check by going to the “Account” section of your settings and selecting “Request account info.” The more than one billion users who have joined since then, however, did not have the option to refuse this expanded sharing of their data, and have been subject to the 2016 policy this entire time.

Now, WhatsApp is changing the terms again. The new terms and privacy policy are mainly concerned with how businesses on WhatsApp can store and host their communications. This is happening as WhatsApp plans to roll out new commerce tools in the app like Facebook Shops. Taken together, this renders the borders between WhatsApp and Facebook (and Facebook-owned Instagram) even more permeable and ambiguous. Information about WhatsApp users’ interactions with Shops will be available to Facebook, and can be used to target the ads you see on Facebook and Instagram. On top of the WhatsApp user data Facebook already has access to, this is one more category of information that can now be shared and used for ad targeting. And there’s still no meaningful way to opt out.

So when WhatsApp says that its data sharing practices and policies haven’t changed, it is correct—and that’s exactly the problem. Those practices and policies have represented an erosion of Facebook’s and WhatsApp’s original promises to keep the apps separate for over four years now, and these new products mean the scope of data that WhatsApp has access to, and can share with Facebook, is only expanding. 

All of this looks different for users in the EU, who are protected by the EU’s General Data Protection Regulation, or GDPR. The GDPR prevents WhatsApp from simply passing on user data to Facebook without the permission of its users. As user consent must be freely given, voluntary, and unambiguous, the all-or-nothing consent framework that appeared to many WhatsApp users last week is not allowed. Tying consent for the performance of a service—in this case, private communication on WhatsApp—to additional data processing by Facebook—like shopping, payments, and data sharing for targeted advertising—violates the “coupling prohibition” under the GDPR.

The Problems with Messenger Monetization

Facebook has been looking to monetize its messaging properties for years. WhatsApp’s 2016 privacy policy change paved the way for Facebook to make money off it, and its recent announcements and changes point to a monetization strategy focused on commercial transactions that span WhatsApp, Facebook, and Instagram.

Offering a hub of services on top of core messaging functionality is not new—LINE and especially WeChat are two long-standing examples of “everything apps”—but it is a problem for privacy and competition, especially given WhatsApp's pledge to remain a “standalone” product from Facebook. Even more dangerously, this kind of mission creep might give those who would like to undermine secure communications another pretense to limit, or demand access to, those technologies.

With three major social media and messaging properties in its “family of companies”—WhatsApp, Facebook Messenger, and Instagram Direct—Facebook is positioned to blur the lines between various services with anticompetitive, user-unfriendly tactics. When WhatsApp bundles new Facebook commerce services around the core messaging function, it bundles the terms users must agree to as well. The message this sends to users is clear: regardless of what services you choose to interact with (and even regardless of whether or when those services are rolled out in your geography), you have to agree to all of it or you’re out of luck. We’ve addressed similar user choice issues around Instagram’s recent update.

After these new shopping and payment features, it wouldn’t be unreasonable to expect WhatsApp to drift toward even more data sharing for advertising and targeting purposes. After all, monetizing a messenger isn’t just about making it easier for you to find businesses; it's also about making it easier for businesses to find you.

Facebook is no stranger to building and then exploiting user trust. Part of WhatsApp’s immense value to Facebook was, and still is, its reputation for industry-leading privacy and security. We hope that doesn’t change any further.  

EFF Welcomes Fourth Amendment Defender Jumana Musa to Advisory Board

EFF - Tue, 01/12/2021 - 4:55pm

Our Fourth Amendment rights are under attack in the digital age, and EFF is proud to announce that human rights attorney and racial justice activist Jumana Musa has joined our advisory board, bringing great expertise to our fight defending users’ privacy rights.

Musa is Director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers (NACDL), where she oversees initiatives to challenge Fourth Amendment violations and outdated legal doctrines that have allowed the government and law enforcement to rummage, with little oversight or restrictions, through people’s private digital files.

The Fourth Amendment Center provides assistance and training for defense attorneys handling cases involving surveillance technologies like geofencing, Stingrays that track people’s digital locations, facial recognition, and more.

In a recent episode of EFF’s How to Fix the Internet podcast, Musa said an important goal in achieving privacy protections for users is to build case law to remove the “third party doctrine.” This is the judge-created legal tenet that metadata—the names of people you called or who called you, websites you visited, or your location—held by third parties like Internet providers, phone companies, or email services isn’t private and therefore isn’t protected by the Fourth Amendment. Police are increasingly using spying tools in criminal investigations to gather metadata on whole communities or during protests, Musa said, a practice that disproportionately affects Black and Indigenous people, and communities of color.

Prior to joining NACDL, Ms. Musa was a policy consultant for the Southern Border Communities Coalition, a coalition of over 60 groups across the Southwest organized to help immigrants facing brutality and abuse by border enforcement agencies and to support a humane immigration agenda.

Previously, as Deputy Director for the Rights Working Group, a national coalition of civil rights, civil liberties, human rights, and immigrant rights advocates, Musa coordinated the “Face the Truth” campaign against racial profiling. She was also the Advocacy Director for Domestic Human Rights and International Justice at Amnesty International USA, where she addressed the domestic and international impact of U.S. counterterrorism efforts on human rights. She was one of the first human rights attorneys allowed to travel to the naval base at Guantanamo Bay, Cuba, and served as Amnesty International's legal observer at military commission proceedings on the base. 

Welcome to EFF, Jumana!

Face Surveillance and the Capitol Attack

EFF - Tue, 01/12/2021 - 3:11pm

After last week’s violent attack on the Capitol, law enforcement is working overtime to identify the perpetrators. This is critical to accountability for the attempted insurrection. Law enforcement has many, many tools at their disposal to do this, especially given the very public nature of most of the organizing. But we object to one method reportedly being used to determine who was involved: law enforcement using facial recognition technologies to compare photos of unidentified individuals from the Capitol attack to databases of photos of known individuals. There are just too many risks and problems in this approach, both technically and legally, to justify its use. 

Government use of facial recognition crosses a bright red line, and we should not normalize its use, even during a national tragedy.

EFF Opposes Government Use of Face Recognition

Make no mistake: the attack on the Capitol can and should be investigated by law enforcement. The attackers’ use of public social media to both prepare and document their actions will make the job easier than it otherwise might be.  

But a ban on all government use of face recognition, including its use by law enforcement, remains a necessary precaution to protect us from this dangerous and easily misused technology. This includes a ban on government’s use of information obtained by other government actors and by third-party services through face recognition.

One such service is Clearview AI, which allows law enforcement officers to upload a photo of an unidentified person and, allegedly, get back publicly-posted photos of that person. Clearview has reportedly seen a huge increase in usage since the attack. Yet the faceprints in Clearview’s database were collected, without consent, from millions of unsuspecting users across the web, from places like Facebook, YouTube, and Venmo, along with links to where those photos were posted on the Internet. This means that police are comparing images of the rioters to those of many millions of individuals who were never involved—probably including yours. 

EFF opposes law enforcement use of Clearview, and has filed an amicus brief against it in a suit brought by the ACLU. The suit correctly alleges the company’s faceprinting without consent violates the Illinois Biometric Information Privacy Act (BIPA). 

Separately, police tracking down the Capitol attackers are likely using government-controlled databases, such as those maintained by state DMVs, for face recognition purposes. We also oppose this use of face recognition technology, which matches images collected during nearly universal practices like applying for a driver’s license. Most individuals require government-issued identification or a license but have no ability to opt out of such face surveillance. 

Face Recognition Impacts Everyone, Not Only Those Charged With Crimes 

The number of people affected by government use of face recognition is staggering: from DMV databases alone, roughly two-thirds of the population of the U.S. is at risk of image surveillance and misidentification, with no choice to opt out. Further, Clearview has extracted faceprints from over 3 billion people. This is not a question of “what happens if face recognition is used against you?” It is a question of how many times law enforcement has already done so. 

For many of the same reasons, EFF also opposes government identification of those at the Capitol by means of dragnet searches of cell phone records of everyone present. Such searches have many problems, from the fact that users are often not actually where records indicate they are, to this tactic’s history of falsely implicating innocent people. The Fourth Amendment was written specifically to prevent these kinds of overbroad searches.

Government Use of Facial Recognition Would Chill Protected Protest Activity

Facial surveillance technology allows police to track people not only after the fact but also in real time, including at lawful political protests. Police repeatedly used this same technology to arrest people who participated in last year’s Black Lives Matter protests. Its normalization and widespread use by the government would fundamentally change the society in which we live. It will, for example, chill and deter people from exercising their First Amendment-protected rights to speak, peacefully assemble, and associate with others. 

Countless studies have shown that when people think the government is watching them, they alter their behavior to try to avoid scrutiny. And this burden historically falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups.

Face surveillance technology is also prone to error and has already implicated multiple people in crimes they did not commit.

Government use of facial recognition crosses a bright red line, and we should not normalize its use, even during a national tragedy. In responding to this unprecedented event, we must thoughtfully consider not just the unexpected ramifications that any new legislation could have, but the hazards posed by surveillance techniques like facial recognition. This technology poses a profound threat to personal privacy, racial justice, political and religious expression, and the fundamental freedom to go about our lives without having our movements and associations covertly monitored and analyzed.

 

Beyond Platforms: Private Censorship, Parler, and the Stack

EFF - Mon, 01/11/2021 - 7:00pm

Last week, following riots that saw supporters of President Trump breach and sack parts of the Capitol building, Facebook and Twitter made the decision to give the president the boot. That was notable enough, given that both companies had previously treated the president, like other political leaders, as largely exempt from content moderation rules. Many of the president’s followers responded by moving to Parler. This week, the response has taken a new turn: infrastructure companies much closer to the bottom of the technical “stack,” including Amazon Web Services (AWS) and Google’s Android and Apple’s iOS app stores, are deciding to cut off service not just to an individual but to an entire platform. Parler has so far struggled to return online, partly through errors of its own making, but also because the lower down the technical stack you go, the harder it is to find alternatives or to re-implement capabilities the rest of the Internet takes for granted.

Whatever you think of Parler, these decisions should give you pause. Private companies have strong legal rights under U.S. law to refuse to host or support speech they don’t like. But that refusal carries different risks when a group of companies comes together to ensure that certain speech or speakers are effectively taken offline altogether.

The Free Speech Stack—aka “Free Speech Chokepoints”

To see the implications of censorship choices by deeper stack companies, let’s back up for a minute. As researcher Joan Donovan puts it, “At every level of the tech stack, corporations are placed in positions to make value judgments regarding the legitimacy of content, including who should have access, and when and how.” And the decisions made by companies at varying layers of the stack are bound to have different impacts on free expression.

At the top of the stack are services like Facebook, Reddit, or Twitter: platforms whose decisions about who to serve (or what to allow) are comparatively visible, though still far too opaque to most users. Their responses can be comparatively targeted to specific users and content and, most importantly, do not cut off as many alternatives. For instance, a discussion forum lies close to the top of the stack: if you are booted from such a platform, there are other venues in which you can exercise your speech. These are the sites and services that all users (both content creators and content consumers) interact with most directly. They are also the places people think of when they think of the content (i.e., “I saw it on Facebook”). Users are often required to have individual accounts, or are advantaged if they do. Users may also specifically seek out these sites for their content. The closer to the user end, the more likely it is that sites will have more developed and apparent curatorial and editorial policies and practices, their "signature styles." And users typically have an avenue, flawed as it may be, to communicate directly with the service.

At the other end of the stack are internet service providers (ISPs), like Comcast or AT&T. Decisions made by companies at this layer of the stack to remove content or users raise greater concerns for free expression, especially when there are few if any competitors. For example, it would be very concerning if the only broadband provider in your area cut you off because they didn’t like what you said online, or what someone else whose name is on the account said. The adage “if you don’t like the rules, go elsewhere” doesn’t work when there is nowhere else to go.

In between are a wide array of intermediaries, such as upstream hosts like AWS, domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services. EFF has a handy chart of some of those key links between speakers and their audience here. These intermediaries provide the infrastructure for speech and commerce, but many have only the most tangential relationship to their users. Faced with a complaint, a takedown will be much easier and cheaper than a nuanced analysis of a given user’s speech, much less the speech that might be hosted by a company that is a user of their services. So these services are more likely to simply cut a user or platform off than to do a deeper review. Moreover, in many cases both speakers and audiences will not be aware of the identities of these services and, even if they are, will have no independent relationship with them. These services are thus not commonly associated with the speech that passes through them and have no "signature style" to enforce.
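
As a rough summary of the last three paragraphs, the sketch below groups the examples from the text by layer. The layers and example companies come from the text above; the notes on substitutability are a simplification for illustration, not a complete taxonomy.

```python
# Rough sketch of the "stack" described above, from most to least user-facing.
free_speech_stack = [
    {
        "layer": "user-facing platforms",
        "examples": ["Facebook", "Reddit", "Twitter"],
        "notes": "visible (if opaque) moderation; if ejected, other venues exist",
    },
    {
        "layer": "infrastructure intermediaries",
        "examples": ["AWS (hosting)", "domain name registrars",
                     "certificate authorities", "CDNs",
                     "payment processors", "email services"],
        "notes": "tangential relationship to speakers; takedowns tend to be blunt",
    },
    {
        "layer": "internet service providers",
        "examples": ["Comcast", "AT&T"],
        "notes": "often few or no alternatives; being cut off can mean going offline",
    },
]

for level in free_speech_stack:
    print(level["layer"], "->", ", ".join(level["examples"]))
```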

Infrastructure Takedowns Are Equally If Not More Likely to Silence Marginalized Voices

We saw a particularly egregious example of an infrastructure takedown just a few months ago, when Zoom made the decision to block a San Francisco State University online academic event featuring prominent activists from Black and South African liberation movements, the advocacy group Jewish Voice for Peace, and controversial figure Leila Khaled—inspiring Facebook and YouTube to follow suit. The decision, which Zoom justified on the basis of Khaled’s alleged ties to a U.S.-designated foreign terrorist organization, was apparently made following external pressure.

Although we have numerous concerns with the manner in which social media platforms like Facebook, YouTube, and Twitter make decisions about speech, we viewed Zoom’s decision differently. Companies like Facebook and YouTube, for good or ill, include content moderation as part of the service they provide. Since the beginning of the pandemic in particular, however, Zoom has been used around the world more like a phone company than a platform. And just as you don’t expect your phone company to start making decisions about who you can call, you don’t expect your conferencing service to start making decisions about who can join your meeting.

Just as you don’t expect your phone company to start making decisions about who you can call, you don’t expect your conferencing service to start making decisions about who can join your meeting.

It is precisely this reason that Amazon’s ad-hoc decision to cut off hosting to social media alternative Parler, in the face of public pressure, should be of concern to anyone worried about how decisions about speech are made in the long run. In some ways, the ejection of Parler is neither a novel, nor a surprising development. Firstly, it is by no means the first instance of moderation at this level of the stack. Prior examples include Amazon denying service to WikiLeaks and the entire nation of Iran. Secondly, the domestic pressure on companies like Amazon to disentangle themselves from Parler was intense, and for good reason. After all, in the days leading up to its removal by Amazon, Parler played host to outrageously violent threats against elected politicians from its verified users, including lawyer L. Lin Wood.

But infrastructure takedowns nonetheless represent a significant departure from the expectations of most users. First, they are cumulative, since all speech on the Internet relies upon multiple infrastructure hosts.  If users have to worry about satisfying not only their host’s terms and conditions but also those of every service in the chain from speaker to audience—even though the actual speaker may not even be aware of all of those services or where they draw the line between hateful and non-hateful speech—many users will simply avoid sharing controversial opinions altogether. They are also less precise. In the past, we’ve seen entire large websites darkened by upstream hosts because of a complaint about a single document posted. More broadly, infrastructure level takedowns move us further toward a thoroughly locked-down, highly monitored web, from which a speaker can be effectively ejected at any time.

Going forward, we are likely to see more cases that look like Zoom’s censorship of an academic panel than we are Amazon cutting off another Parler. Nevertheless, Amazon’s decision highlights core questions of our time: Who should decide what is acceptable speech, and to what degree should companies at the infrastructure layer play a role in censorship?

At EFF, we think the answer is both simple and challenging: wherever possible, users should decide for themselves, and companies at the infrastructure layer should stay well out of it. The firmest, most consistent, approach infrastructure chokepoints can take is to simply refuse to be chokepoints at all. They should act to defend their role as a conduit, rather than a publisher. Just as law and custom developed a norm that we might sue a publisher for defamation, but not the owner of the building the publisher occupies, we are slowly developing norms about responsibility for content online. Companies like Zoom and Amazon have an opportunity to shape those norms—for the better or for the worse.

Internet Policy and Practice Should Be User-Driven, Not Crisis-Driven

It’s easy to say today, in a moment of crisis, that a service like Parler should be shunned. After all, people are using it to organize attacks on the U.S. Capitol and on Congressional leaders, with an expressed goal to undermine the democratic process. But when the crisis has passed, pressure on basic infrastructure, as a tactic, will be re-used, inevitably, against unjustly marginalized speakers and forums. This is not a slippery slope, nor a tentative prediction—we have already seen this happen to groups and communities that have far less power and resources than the President of the United States and the backers of his cause. And this facility for broad censorship will not be lost on foreign governments who wish to silence legitimate dissent either. Now that the world has been reminded that infrastructure can be commandeered to make decisions to control speech, calls for it will increase: and principled objections may fall to the wayside. 

Over the coming weeks, we can expect to see more decisions like these from companies at all layers of the stack. Just today, Facebook removed members of the Ugandan government in advance of Tuesday’s elections in the country, out of concerns for election manipulation. Some of the decisions that these companies make may be well-researched, while others will undoubtedly come as the result of external pressure and at the expense of marginalized groups.

The core problem remains: regardless of whether we agree with an individual decision, these decisions overall have not and will not be made democratically and in line with the requirements of transparency and due process. Instead, they are made by a handful of individuals, in a handful of companies, the ones most distanced from and least visible to most Internet users. Whether you agree with those decisions or not, you will not be a part of them, nor be privy to their considerations. And unless we dismantle the increasingly centralized chokepoints in our global digital infrastructure, we can anticipate an escalating political battle between political factions and nation states to seize control of their powers.


The FCC and States Must Ban Digital Redlining

EFF - Mon, 01/11/2021 - 6:22pm

The rollout of fiber broadband will never reach many communities in the US. That’s because large, national ISPs are currently laying fiber primarily for high-income users, to the detriment of the rest of their customers. The absence of regulators has created a situation where wealthy end users are getting fiber while predominantly low-income users are not being transitioned off legacy infrastructure. The result is a “digital redlining” of broadband, where wealthy broadband users get the benefits of cheaper and faster Internet access through fiber, and low-income broadband users are left behind with slower, more expensive access from that same carrier. We have seen this type of economic discrimination in the past in other venues such as housing, and it is happening now with 21st-century broadband access.

It doesn’t have to be this way. Federal, state, and local governments have a clear role in promoting non-discriminatory deployment and have historically enforced rules to prevent unjust discrimination. States and local governments have the power, through franchise authority, to prohibit unjust discrimination with build-out requirements. In fact, it is already illegal in California to discriminate based on income status, as EFF noted in its comments to the state’s regulator. And cities that hold direct authority over ISPs can require non-discrimination, as New York City did last year when it required Verizon to deploy 500,000 more fiber connections to low-income users.

That’s why dozens of organizations have asked the incoming Biden FCC to directly confront digital redlining after the FCC reverses the Trump-era deregulation of broadband providers and restores their common carriage obligations. For the last three years, the FCC has abandoned its authority to address these systemic inequalities, causing it to sit out the pandemic at a time when dependence on broadband is sky-high. It is time to treat broadband as essential as water and electricity and ensure that, as a matter of law, everyone gets the access they deserve.

What the Data Is Showing Us on Fiber in Cities

A great number of people in cities that can be served fiber in a commercially feasible manner (that is, you can build it and make a profit without government subsidies) are still on copper DSL networks. Studies of major metropolitan areas such as Oakland and Los Angeles County are showing systemic discrimination against low-income users in fiber deployment despite high population density, and because income can often serve as a proxy for race, this falls particularly hard on neighborhoods of color.

Other studies conducted by the Communications Workers of America and the National Digital Inclusion Alliance have found that this digital redlining is systemic across AT&T’s footprint, with only one-third of AT&T wireline customers connected to its fiber. In fact, not only is AT&T not deploying fiber to all of its customers over time; it is now preparing to disconnect its copper DSL customers, leaving them no other choice than an unreliable mobile connection.

There are no good reasons for this discrimination to continue. For example, Oakland has an estimated 7,000+ people per square mile, far above the density needed to finance at least one city-wide fiber network. Tightly packed populations are ideal for broadband providers because they have to invest less in infrastructure to reach a large number of paying customers: 7,000 potential customers per square mile is far more than a provider needs to pay for a fiber build. Chattanooga, which has its fiber deployed by the local government, has a population density of only 1,222 people per square mile. Since the Chattanooga government ISP publicly reports its finances in great detail, we can see its extremely rosy numbers (chart below) with a fraction of the density of Oakland and many other underserved cities. In fact, rural cooperatives are doing gigabit fiber at 2.4 people per square mile.

EFF assembled this chart based on data publicly reported by EPB (https://epb.com/about-epb/leadership-annual-reports).

In other words, there are no good reasons for this discrimination. If governments require carriers to deploy fiber in a non-discriminatory way, they will still make a profit. The question really boils down to whether we are going to allow the incrementally higher profits of discrimination to continue, despite our history of enforcing laws against such practices.

There Are Concrete Ramifications for Broadband Affordability If We Do Not Resolve Digital Redlining of Fiber in Major Cities

The pandemic has shown us that broadband is not equally accessible at affordable prices, even in our major cities. The photo of two little girls doing their homework in a fast-food parking lot (below) that caught media attention wasn’t taken in a rural part of America; it was taken in Salinas, California, which has a population density of 6,490 people per square mile.

Photo was taken at a Taco Bell in Salinas, California (https://www.ktvu.com/news/photo-of-girls-using-taco-bell-wifi-becomes-symbol-of-digital-divide)

 

Those kids probably had some basic Internet access, but it was likely too expensive and too slow to handle remote education. And that should come as no surprise, given that a recent comprehensive study by the Open Technology Institute found that the United States has, on average, the most expensive and slowest Internet among modern economies.

The lack of ubiquitous fiber infrastructure limits the government’s ability to support efforts to deliver access to those of limited income. When you don’t have fiber in those neighborhoods, all you can do is what the city of Salinas did, which was pay for an expensive, slow mobile hotspot. Meanwhile, Chattanooga is able to give 100/100 Mbps broadband access to all of its low-income families, free for 10 years, for around $8 million. Since the fiber is already built and connected throughout the city, and because it is very cheap to add people to the fiber network once built, it only costs the city an average of $2-$3 per month per child to give 28,000 kids free fast Internet at cost (that is, without making a profit). If we want to make free fast Internet a reality, we need the infrastructure that can keep costs sufficiently low to realistically deliver it.
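
For readers who want to check the math, here is a quick back-of-the-envelope calculation using the approximate figures cited above (about $8 million, 10 years, 28,000 kids); it is an estimate, not EPB’s actual program accounting.

```python
# Rough check of the per-child cost cited above (assumed, approximate inputs).
total_cost_dollars = 8_000_000
years = 10
kids = 28_000

per_child_per_month = total_cost_dollars / (years * 12 * kids)
print(f"~${per_child_per_month:.2f} per child per month")  # prints roughly $2.38
```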

The massive discrepancy between fiber and non-fiber is due to the fact that the older networks are getting more expensive to run and cannot cheaply add new users at higher speeds. No amount of subsidy will change the physical limitations of those networks, which will only grow more expensive to maintain as they become obsolete.

The future of all things wireless and wireline in broadband runs through fiber infrastructure. It is a universal medium that is unifying the 21st century Internet because it can scale up capacity far ahead of expected growth in demand in a cost-effective way. It has orders of magnitude greater potential and capacity than any other wireline or wireless medium that transmits data. Our technical analysis concluded that the 21st century Internet is one where all Americans are connected to fiber, and we are actively supporting efforts in DC to pass a universal fiber plan as well as efforts in states like California. But for major cities, the lack of ubiquitous fiber is not due to a lack of government spending; it is due to the lack of regulatory enforcement of non-discrimination.

 

The Government Has All of the Powers it Needs to Find and Prosecute Those Responsible for the Crimes on Capitol Hill this Week.

EFF - Mon, 01/11/2021 - 11:54am

Perpetrators of the horrific events that took place at the Capitol on January 6 had a clear goal: to undermine the legitimate operations of government, to disrupt the peaceful transition of power, and to intimidate, hurt, and possibly kill the political leaders who disagree with their worldview. 

These are all crimes that can and should be investigated by law enforcement. Yet history provides the clear lesson that immediate legislative responses to an unprecedented national crime or a deeply traumatic incident can have profound, unforeseen, and often unconstitutional consequences for decades to come. Innocent people—international travelers, immigrants, asylum seekers, activists, journalists, attorneys, and everyday Internet users—have spent the last two decades contending with the loss of privacy, government harassment, and exaggerated sentencing that came along with the PATRIOT Act and other laws passed in the wake of national tragedies.

Law enforcement does not need additional powers, new laws, harsher sentencing mandates, or looser restrictions on the use of surveillance measures to investigate and prosecute those responsible for the dangerous crimes that occurred. Moreover, we know from experience that any such new powers will inevitably be used to target the most vulnerable members of society or be used indiscriminately to surveil the broader public. 

EFF has spent the last three decades pushing back against overbroad government powers—in courts, in Congress, and in state legislatures—to demand an end to unconstitutional surveillance and to warn against the dangers of technology being abused to invade people’s rights. To take just a few present examples: we continue to fight against the NSA’s mass surveillance programs, the exponential increase of surveillance power used by immigration enforcement agencies at the U.S. border and in the interior, and the inevitable creep of military surveillance into everyday law enforcement to this day. The fact that we are still fighting these battles shows just how hard it is to end unconstitutional overreactions.

Policymakers must learn from those mistakes. 

First, Congress and state lawmakers should not even consider passing any new laws or deploying any new surveillance technology without first explaining why law enforcement’s vast existing powers are insufficient. This is particularly true given that the perpetrators of the January 6 violence appear to have planned, organized, and executed their acts in the open, and that much of the evidence of their crimes is publicly available.

Second, lawmakers must understand that any new laws or technology will likely be turned on vulnerable communities as well as those who vocally and peacefully call for social change. Elected leaders should not exacerbate ongoing injustice via new laws or surveillance technology. Instead, they must work to correct the government’s past abuse of powers to target dissenting and marginalized voices.   

As Representative Barbara Lee said in 2001, “Our country is in a state of mourning. Some of us must say, let’s step back for a moment . . . and think through the implications of our actions today so that this does not spiral out of control.” Surveillance, policing, and exaggerated sentencing—no matter their intention—always end up being wielded against the most vulnerable members of society. We urge President-elect Joe Biden to pause and to not let this moment contribute to that already startling inequity in our society. 

YouTube and TikTok Put Human Rights In Jeopardy in Turkey

EFF - Sat, 01/09/2021 - 5:25pm

Democracy in Turkey is in a deep crisis. Its ruling party, led by Recep Tayyip Erdoğan, systematically silences marginalized voices, shuts down dissident TV channels, sentences journalists, and disregards the European Court of Human Rights’ decisions. As we wrote in November, in this oppressive atmosphere, Turkey’s new Social Media Law has doubled down on previous online censorship measures by requiring sites to appoint a local representative who can be served with content removal demands and data localization mandates. This local representative would also be responsible for meeting the law’s fast turnaround times for responding to government requests.

The pushback against the requirements of the Social Media Law was initially strong. Facebook, Instagram, Twitter, Periscope, YouTube, and TikTok had not appointed representatives when the law was first introduced late last year. But now, powerful platforms have begun to capitulate despite the law’s explicit threat to users’ fundamental rights: YouTube announced its decision on December 16th, followed by TikTok last Friday and DailyMotion just today, January 9th. These decisions create a bad precedent that will make it harder for other companies to fight back.

Both companies now plan to set up a “legal entity” in Turkey, providing the government a local point of contact. Even though both announcements promise that the platforms will not change their content review, data handling, or data retention practices, it is not clear how YouTube or TikTok will challenge or stand up to the Turkish government once they agree to set up legal shops on Turkish soil. The move by YouTube (and the lack of transparency around its decision) is particularly disappointing given the importance of the platform for political speech and the Turkish government’s more than a decade of attempts to control YouTube content. 

The Turkish administration and courts have long attempted to punish sites like YouTube and Twitter that do not comply with takedown orders to the government’s satisfaction. With a local legal presence, government officials can do more than throttle or block sites; they can force platforms to arbitrarily remove perfectly legal political speech, disclose political activists’ data, or become complicit in government-sanctioned human rights violations. Arbitrary political arrests and detentions are increasingly common inside the country, sweeping up information security professionals, journalists, doctors, and lawyers. A local employee of an Internet company in such a hostile environment could, quite literally, be a hostage to government interests.

Reacting to Friday’s TikTok news, Yaman Akdeniz, one of the founders of the Turkish Freedom of Expression Association, told EFF: 

“TikTok is completely misguided about Turkey Internet-related restrictions and government demands. The company will become part of the problem and can become complicit in rights violations in Turkey.”

Chilling Effects on Freedom of Expression 

Turkey’s government has been working to create ways to control foreign Internet sites and services for many years. Under the new Social Media Law, failure to appoint a representative leads to stiff fines, an advertisement ban, and throttling of the provider’s bandwidth. According to the law, the Turkish Information and Communication Technologies Authority (Bilgi Teknolojileri ve İletişim Kurumu or BTK) can issue a five-phase set of fines. BTK has already sanctioned social media platforms that did not appoint local representatives by imposing two initial sets of fines, on November 4 and December 11, of TRY10 million ($1.3 million) and TRY30 million ($4 million) respectively. Facing these fines, YouTube, and now TikTok, blinked. 

If a platform still has not appointed a Turkey-based representative by January 17, 2021, BTK can prohibit Turkish taxpayers from placing ads on, or making payments to, that provider’s platform. If the provider continues to refuse to appoint a representative by April 2021, BTK can apply to a Criminal Judgeship of Peace to throttle the provider’s bandwidth by an initial 50%. If the provider still has not appointed a representative by May 2021, BTK can apply for a further bandwidth reduction; this time, the judgeship can decide to throttle the provider’s bandwidth anywhere between 50% and 90%. 
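To make the escalation easier to follow, here is a minimal sketch that lays the schedule described above out as data (the dollar amounts are the approximate conversions cited in this post, not official figures):

    # Escalating sanctions under Turkey's Social Media Law for providers
    # that do not appoint a local representative, as described above.
    sanction_timeline = [
        ("2020-11-04", "First fine: TRY 10 million (~$1.3 million)"),
        ("2020-12-11", "Second fine: TRY 30 million (~$4 million)"),
        ("2021-01-17", "Ad ban: Turkish taxpayers barred from advertising on or paying the platform"),
        ("2021-04", "BTK may ask a Criminal Judgeship of Peace to throttle bandwidth by 50%"),
        ("2021-05", "Judgeship may order further throttling, between 50% and 90%"),
    ]

    for date, sanction in sanction_timeline:
        print(f"{date}: {sanction}")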

The Turkish government should refrain from imposing disproportionate sanctions on platforms, given their significant chilling effect on freedom of expression. Moreover, throttling leaves people in Turkey without meaningful access to social media sites; it is effectively a technical ban on access to those sites and services, and an inherently disproportionate measure. 

Human Rights Groups Fight Back

EFF stands with the Turkish Freedom of Expression Association (Tr. İfade Özgürlüğü Derneği), Human Rights Watch, and Article 19 in their protest against YouTube’s decision. In a joint letter, they urge YouTube to reverse course and stand firm against the Turkish government’s pressure. The letter asks YouTube to clarify how the company intends to respect its users’ rights to freedom of expression and privacy in Turkey, and to publish the Human Rights Impact Assessment behind its decision to appoint a representative office that can be served with content takedown notifications. 

YouTube, a Google subsidiary, has a corporate responsibility to uphold freedom of expression as guided by the UN Guiding Principles on Business and Human Rights, a global standard of “expected conduct for all business enterprises wherever they operate.” The Principles exist independently of States’ willingness to fulfill their human rights obligations, do not diminish such commitments, and apply over and above compliance with national laws and regulations protecting human rights.

YouTube’s Newly Precarious Position

According to YouTube’s Community Guidelines, legal content is not removed unless it violates the site rules. Moreover, content is removed within a country only if it violates the laws of that country, as determined by YouTube’s lawyers. The Transparency Report for Turkey shows that Google did not take any action concerning government takedown requests for 46.6% of Turkey’s cases.

Those declined orders demonstrate how overbroad and politicized Turkey’s takedown process has become, and how important YouTube’s freedom to challenge such orders has been. In one instance, the Turkish government requested the takedown of videos in which officials attack Syrian refugees trying to cross the Turkey-Greece border; YouTube removed just 1 of the 34 videos, because only that one violated its community rules. In another instance, YouTube received a BTK request, and then a court order, to remove 84 videos that criticized high-level government officials. YouTube blocked access to seven videos, 16 were deleted by their uploaders, and 61 remained on the site. In yet another example, YouTube declined to take down 242 videos allegedly related to an individual affiliated with law enforcement. 

With a local representative in place, YouTube will find it much harder to resist such orders and to uphold its responsibilities as a member of the Global Network Initiative and under international human rights law.

Social media companies must uphold international human rights law when it conflicts with local laws. The UN Special Rapporteur on free expression has called upon companies to recognize human rights law, not domestic laws, as the authoritative global standard for freedom of expression on their platforms. Likewise, the UN Guiding Principles on Business and Human Rights provide that companies respect human rights and avoid contributing to human rights violations. This becomes especially important in countries where democracy is most fragile. The Global Network Initiative’s implementation guidelines were written to cover cases where its corporate members operate in countries where local law conflicts with human rights. YouTube has given no public indication of how it would meet its GNI commitments given its changed relationship with Turkey.

Tech companies have come under increasing criticism for flouting or ignoring local laws, or for treating non-U.S. countries with little understanding of the local context. In many cases, local compliance and representation by powerful multinational companies can be a positive step. 

But Turkey is not one of those cases. Its ruling party has undermined democratic pluralism, an independent judiciary, and separation of powers in recent years. Lack of checks and balances creates an oppressive atmosphere and results in the total absence of due process. Compliance with local Turkish law can potentially mean becoming the arm of an increasingly totalitarian State and complicity with its human rights violations. 

Arbitrary Blocking and Content Removal

EngelliWeb (Eng. BlockedWeb), a comprehensive project initiated by the Turkish Freedom of Expression Association, keeps statistical records of censored content and reports on it. In one instance, an EngelliWeb story reporting the blocking of another site itself became the subject of a court order demanding that the story be blocked and removed. The Association has recently announced it will object to the court decision. Another blocking decision is striking because the same judge who ruled in favor of the defendant in a defamation lawsuit also ordered access blocked to a news story about that very lawsuit, on the grounds of “violation of personal rights.” These examples demonstrate there is no justified reason, much less a legal one, to censor such news in Turkey. 

According to Yaman Akdeniz, an academic and one of the founders of the Turkish Freedom of Expression Association:

“Turkish judges issue approximately 12,000 blocking and removal decisions each year, and over 450,000 websites and 140,000 URL are currently blocked from Turkey according to our EngelliWeb research. In YouTube’s case, access to over 10,000 YouTube videos is currently blocked from Turkey. In the absence of due process and independent judiciary, almost all appeals involving such decisions are rejected by same level judges without proper legal scrutiny. In the absence of due process, YouTube and any other social media platform provider willing to come to Turkey, risk becoming the long arm of the Turkish judiciary. 

...the Constitutional Court has become part of the problems associated with the judiciary and does not swiftly decide individual applications involving Internet-related blocking and removal decisions. Even when the Constitutional Court finds a violation as in the cases of Wikipedia, Sendika.Org, and others, the lower courts constantly ignore the decisions of the Constitutional Court which then diminish substantially the impact of such decisions.”

Social media companies should not give in to this pressure. If social media companies comply with the law, then the Turkish authoritarian government wins without a fight. YouTube and TikTok should not lead the retreat.

California City’s Effort to Punish Journalists For Publishing Documents Widely Available Online is Dangerous and Chilling, EFF Brief Argues

EFF - Fri, 01/08/2021 - 5:49pm

As part of their jobs, journalists routinely dig through government websites to find newsworthy documents and share them with the broader public. Journalists and Internet users understand that publicly available information on government websites is not secret and that, if government officials want to protect information from being disclosed online, they shouldn’t publicly post it on the Internet.

But a California city is ignoring these norms and trying to punish several journalists for doing their jobs. The city of Fullerton claims that the journalists, who write for a digital publication called Friends for Fullerton’s Future, violated federal and state computer crime laws by accessing documents publicly available to any Internet user. Not only is the city’s civil suit a transparent attempt to cover up its own poor Internet security practices, it also threatens to chill valuable and important journalism. That’s why EFF, along with the ACLU and the ACLU of Southern California, filed a friend-of-the-court brief in a California appellate court this week in support of the journalists.

The city sued two journalists and Friends for Fullerton’s Future based on several claims, including an allegation that they violated California’s Comprehensive Computer Data Access and Fraud Act when they obtained and published documents officials posted to a city file-sharing website that was available to anyone with an Internet connection. For months, the city made the file-sharing site available to the public without a password or any other access restrictions and used it to conduct city business, including providing records to members of the public who requested them under the California Public Records Act.

Even though officials took no steps to limit public access to the city’s file-sharing site, they nonetheless objected when the journalists published publicly available documents that officials believed should not have been public or the subject of news stories. And instead of taking steps to ensure the public did not have access to sensitive government documents, the city is trying to stretch California’s computer crime law, known as Section 502, to punish the journalists. 

EFF’s amicus brief argues that the city’s interpretation of California’s Section 502, which was intended to criminalize malicious computer intrusions and is similar to the federal Computer Fraud and Abuse Act, is wrong as a legal matter and that it threatens to chill the public’s constitutionally protected right to publish information about government affairs.

The City contends that journalists act “without permission,” and thus commit a crime under Section 502, by accessing a particular City controlled URL and downloading documents stored there—notwithstanding the fact that the URL is in regular use in City business and has been disseminated to the general public. The City claims that an individual may access a publicly available URL, and download documents stored in a publicly accessible account, only if the City specifically provides that URL in an email addressed to that particular person. But that interpretation of “permission” produces absurd—and dangerous—results: the City could choose arbitrarily to make a criminal of many visitors to its website, simply by claiming that it had not provided the requisite permission-email to the Visitor.

The city’s interpretation of Section 502 also directly conflicts with “the longstanding open-access norms of the Internet,” the brief argues. Because Internet users understand that they have permission to access information posted publicly on the Internet, the city must take affirmative steps to restrict access via technical barriers before it can claim a Section 502 violation.

The city’s broad interpretation of Section 502 is also dangerous because, if accepted, it would threaten a great deal of valuable journalism protected by the First Amendment.

The City’s interpretation would permit public officials to decide—after making records publicly available online (through their own fault or otherwise)—that accessing those records was illegal. Under the City’s theory, it can retroactively revoke generalized permission to access publicly available documents as to a single individual or group of users once it changes its mind or is simply embarrassed by the documents’ publication. The City could then leverage that revocation of permission into a violation of Section 502 and pursue both civil and criminal liability against the parties who accessed the materials.

Moreover, the “City’s broad reading of Section 502 would chill socially valuable research, journalism, and online security and anti-discrimination testing—activity squarely protected by the First Amendment,” the brief argues. The city’s interpretation of Section 502 would jeopardize important investigative reporting techniques that in the past have uncovered illegal employment and housing discrimination.

Finally, EFF’s brief argues that the city’s interpretation of Section 502 violates the U.S. Constitution’s due process protections because it would fail to give Internet users adequate notice that they were committing a crime while simultaneously giving government officials vast discretion to decide when to enforce the law against Internet users. 

The City proposes that journalists perusing a website used to disclose public records must guess whether particular documents are intended for them or not, intuit the City’s intentions in posting those documents, and then politely look the other way—or be criminally liable. This scheme results in unclear, subjective, and after-the-fact determinations based on the whims of public officials. Effectively, the public would have to engage in mind reading to know whether officials approve of their access or subsequent use of the documents from the City’s website.

The court should reject the city’s arguments and ensure that Section 502 is not abused to retaliate against journalists, particularly because the city is seeking to punish these reporters for its own computer security shortcomings. Publishing government records available to every Internet user is good journalism, not a crime, and using computer crime laws to punish journalists for obtaining documents available to every Internet user is dangerous—and unconstitutional.

ACLU, EFF, and Tarver Law Offices Urge Supreme Court to Protect Against Forced Disclosure of Phone Passwords to Law Enforcement

EFF - Fri, 01/08/2021 - 2:41pm
Does the Fifth Amendment Protect You from Revealing Your Passwords to Police?

Washington, D.C. - The American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF), along with New Jersey-based Tarver Law Offices, are urging the U.S. Supreme Court to ensure the Fifth Amendment protection against self-incrimination extends to the digital age by prohibiting law enforcement from forcing individuals to disclose their phone and computer passcodes.

“The Fifth Amendment protects us from being forced to give police a combination to a wall safe. That same protection should extend to our phone and computer passwords, which can give access to far more sensitive information than any wall safe could,” said Jennifer Granick, ACLU surveillance and cybersecurity counsel. “The Supreme Court should take this case to ensure our constitutional rights survive in the digital age.”

In a petition filed Thursday and first reported by The Wall Street Journal, the ACLU and EFF are asking the U.S. Supreme Court to hear Andrews v. New Jersey. In this case, a prosecutor obtained a court order requiring Mr. Robert Andrews to disclose passwords to two cell phones. Mr. Andrews fought the order, citing his Fifth Amendment privilege. Ultimately, the New Jersey State Supreme Court held that the privilege did not apply to the disclosure or use of the passwords.

“There are few things in constitutional law more sacred than the Fifth Amendment privilege against self-incrimination,” said Mr. Andrews’ attorney, Robert L. Tarver, Jr. “Up to now, our thoughts and the content of our minds have been protected from government intrusion. The recent decision of the New Jersey Supreme Court highlights the need for the Supreme Court to solidify those protections.”

The U.S. Supreme Court has long held, consistent with the Fifth Amendment, that the government cannot compel a person to respond to a question when the answer could be incriminating. Lower courts, however, have disagreed on the scope of the right to remain silent when the government demands that a person disclose or enter phone and computer passwords. This confusing patchwork of rulings has resulted in Fifth Amendment rights depending on where one lives, and in some cases, whether state or federal authorities are the ones demanding the password.

“The Constitution is clear: no one ‘shall be compelled in any criminal case to be a witness against himself,’” said EFF Senior Staff Attorney Andrew Crocker. “When law enforcement requires you to reveal your passcodes, they force you to be a witness in your own criminal prosecution. The Supreme Court should take this case to settle this critical question about digital privacy and self-incrimination.”

For the full petition:
https://www.eff.org/document/petition-writ-certiorari-andrews-v-new-jersey

Contact: Andrew Crocker, Senior Staff Attorney, andrew@eff.org; Mark Rumold, Senior Staff Attorney, mark@eff.org

EFF's Response to Social Media Companies' Decisions to Block President Trump’s Accounts

EFF - Thu, 01/07/2021 - 7:42pm

Like most people in the United States and around the world, EFF is shocked and disgusted by Wednesday’s violent attack on the U.S. Capitol. We support all those who are working to defend the Constitution and the rule of law, and we are grateful for the service of policymakers, staffers, and other workers who endured many hours of lockdown and reconvened to fulfill their constitutional duties. 

The decisions by Twitter, Facebook, Instagram, Snapchat, and others to suspend and/or block President Trump’s communications via their platforms are a simple exercise of their rights, under the First Amendment and Section 230, to curate their sites. We support those rights. Nevertheless, we are always concerned when platforms take on the role of censors, which is why we continue to call on them to apply a human rights framework to those decisions. We also note that those same platforms have chosen, for years, to privilege some speakers—particularly government officials—over others, not just in the U.S. but in other countries as well. A platform should not apply one set of rules to most of its users, and then apply a more permissive set of rules to politicians and world leaders who are already immensely powerful. Instead, platforms should be precisely as judicious about removing the content of ordinary users as they have been to date regarding heads of state. Going forward, we call once again on the platforms to be more transparent and consistent in how they apply their rules—and we call on policymakers to find ways to foster competition so that users have numerous editorial options and policies from which to choose. 
