Electronic Frontier Foundation

The Online Content Policy Modernization Act Is an Unconstitutional Mess

EFF - 7 min 30 sec ago

EFF is standing with a huge coalition of organizations to urge Congress to oppose the Online Content Policy Modernization Act (OCPMA, S. 4632). Introduced by Sen. Lindsey Graham (R-SC), the OCPMA is yet another of this year’s flood of misguided attacks on Internet speech (read the bill [pdf]). The bill would make it harder for online platforms to take common-sense moderation measures like removing spam or correcting disinformation, including disinformation about the upcoming election. But it doesn’t stop there: the bill would also upend longstanding balances in copyright law, subjecting ordinary Internet users to fines of up to $30,000 for everyday activities like sharing photos and writing online, without even the benefit of a judge and jury.

The OCPMA combines two previous bills. The first—the Online Freedom and Viewpoint Diversity Act (S. 4534)—undermines Section 230, the most important law protecting free speech online. Section 230 enshrines the common-sense principle that if you say something unlawful online, you should be the one held responsible, not the website or platform where you said it. Section 230 also makes it clear that platforms have liability protections for the decisions they make to moderate or remove online speech: platforms are free to decide their own moderation policies however they see fit. The OCPMA would flip that second protection on its head, shielding only platforms that agree to confine their moderation policies to a narrowly tailored set of rules. As EFF and a coalition of legal experts explained to the Senate Judiciary Committee:

This narrowing would create a strong disincentive for companies to take action against a whole host of disinformation, including inaccurate information about where and how to vote, content that aims to intimidate or discourage people from casting a ballot, or misleading information about the integrity of our election systems. S.4632 would also create a new risk of liability for services that “editorialize” alongside user-generated content. In other words, sites that direct users to voter-registration pages, that label false information with fact-checks, or that provide accurate information about mail-in voting, would face lawsuits over the user-generated content they were intending to correct.

It’s easy to see the motivations behind the Section 230 provisions in this bill, but they simply don’t hold up to scrutiny. The bill rests on the flawed premise that social media platforms’ moderation practices are rife with bias against conservative views, a popular meme in some right-wing circles that doesn’t hold water. There are serious problems with platforms’ moderation practices, but the problem isn’t the liberal silencing the conservative; it’s the powerful silencing the powerless. Besides, it’s absurd to suggest that the situation would somehow be improved by putting such severe limits on how platforms moderate: the Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more freeform.

The government forcing platforms to adopt a specific approach to moderation is not just a bad idea; it’s unconstitutional. As EFF explained in its own letter to the Judiciary Committee:

The First Amendment prohibits Congress from directly interfering with intermediaries’ decisions regarding what user-generated content they host and how they moderate that content. The OCPM Act seeks to coerce the same result by punishing services that exercise their rights. This is an unconstitutional condition. The government cannot condition Section 230’s immunity on interfering with intermediaries’ First Amendment rights.

Sen. Graham has also used the OCPMA as his vehicle to bring back the CASE Act, a 2019 bill that would have created a new tribunal for hearing “small” ($30,000!) copyright disputes, putting everyday Internet users at risk of losing everything simply for sharing copyrighted images or text online. This tribunal would exist within the Copyright Office, not the judicial branch, and it would lack important safeguards like the right to a jury trial and copyright’s normal registration requirements. As we explained last year, the CASE Act would usher in a new era of copyright trolling, with copyright owners or their agents sending notices en masse to users for sharing memes and transformative works. When Congress was debating the CASE Act last year, its proponents laughed off concerns that the bill would put everyday Internet users at risk, clearly not understanding what a $30,000 judgment would mean to the average family. As EFF and a host of other copyright experts explained to the Judiciary Committee:

The copyright small claims dispute provisions in S. 4632 are based upon S. 1273, the Copyright Alternative in Small-Claims Enforcement Act of 2019 (“CASE Act”), which could potentially bankrupt millions of Americans, and be used to target schools, libraries and religious institutions at a time when more of our lives are taking place online than ever before due to the COVID-19 pandemic. Laws that would subject any American organization or individual — from small businesses to religious institutions to nonprofits to our grandparents and children — to up to $30,000 in damages for something as simple as posting a photo on social media, reposting a meme, or using a photo to promote their nonprofit online are not based on sound policy.

The Senate Judiciary Committee plans to consider the OCPMA soon. This bill is far too much of a mess to be saved by amendments. We urge the Committee to reject it.

Vote for EFF on CREDO's October Ballot

EFF - 5 hours 13 min ago

Right now you can help EFF receive a portion of a $150,000 donation pool just by casting your vote! EFF is one of the three nonprofits featured in CREDO's giving group this month, so if you vote for EFF by October 31 you will direct a bigger piece of the donation pie toward protecting online freedom.

Since its founding, CREDO's mobile, energy, long distance, and credit card services customers have raised more than $90 million for charity. Their mobile customers generate the funds as they use paid services, and each month CREDO encourages the public to choose from three nonprofit recipients that drive positive change.

Anyone in the U.S. can visit the CREDO Donations site and vote for EFF, so spread the word! The more votes we receive, the higher our share of this month's donation pool.

EFF is celebrating its 30th year of fighting for technology users, and your rights online have never mattered more than they do today. Your privacy, access to secure tools, and free expression play crucial roles in seeing us through the pandemic, protecting human rights, and ensuring a brighter future online. Help defend digital freedom and vote for EFF today!

Broad Coalition Urges Court Not to Block California’s Net Neutrality Law

EFF - Wed, 09/30/2020 - 8:25pm

After the federal government rolled back net neutrality protections for consumers in 2017, California stepped up and passed a bill that does what the FCC wouldn’t: bar telecoms from blocking or throttling Internet content and from imposing paid prioritization schemes. The law, SB 822, ensures that all Californians have full access to all Internet content and services—at lower prices.

Partnering with the ACLU of Northern California and numerous other public interest advocates, businesses and educators, EFF filed an amicus brief today urging a federal court to reject the telecom industry’s attempt to block enforcement of SB 822. The industry is claiming that California’s law is preempted by federal law—despite a court ruling that said the FCC can’t impose nationwide preemption of state laws protecting net neutrality.

Without legal protections, low-income Californians who rely on mobile devices for Internet access and can’t pay for more expensive content are at a real disadvantage. Their ISPs could inhibit full access to the Internet, which is critical for distance learning, maintaining small businesses, and staying connected. Schools and libraries are justifiably concerned that without net neutrality protections, paid prioritization schemes will degrade access to material that students and the public need in order to learn. SB 822 addresses that by ensuring that large ISPs do not take advantage of their stranglehold on Californians’ Internet access to slow or otherwise manipulate Internet traffic.

The large ISPs also have a vested interest in shaping Internet use to favor their own subsidiaries and business partners, at the expense of diverse voices and competition. Absent meaningful competition, ISPs have every incentive to leverage their last-mile monopolies to customers’ homes and bypass competition for a range of online services. That would mean less choice, lower quality, and higher prices for Internet users—and new barriers to entry for innovators. SB 822 aims to keep the playing field level for everyone.

These protections are important all of the time, but doubly so in crises like the ones California now faces: a pandemic, the resulting economic downturn, and a state wildfire emergency. And Internet providers have shown that they are not above using emergencies to exploit their gatekeeper power for financial gain. Just two years ago, when massive fires threatened the lives of rescuers, emergency workers, and residents, the Santa Clara fire department found that its “unlimited” data plan was being throttled by Verizon. Internet access on a vehicle the department was using to coordinate its fire response slowed to a crawl. When contacted, the company told firefighters that they needed to pay more for a better plan.

Without SB 822, Californians, and not just first responders, could find themselves in the same situation as the Santa Clara fire department: unable, thanks to throttling or other restrictions, to access information they need or connect with others. We hope the court recognizes how important SB 822 is and why the telecom lobby shouldn’t be allowed to block its enforcement.

Related Cases: California Net Neutrality Cases - American Cable Association, et al v. Xavier Becerra and United States of America v. State of California

Tell the Department of Homeland Security: Stop Collecting DNA and Other Biometrics

EFF - Tue, 09/29/2020 - 1:25pm

We need your help. On September 11, 2020, the Department of Homeland Security (DHS) announced its intention to significantly expand both the number of people required to submit biometrics during routine immigration applications and the types of biometrics that individuals must surrender. This new rule will apply to immigrants and U.S. citizens alike, and to people of all ages, including, for the first time, children under the age of 14. It would nearly double the number of people from whom DHS would collect biometrics each year, to more than six million. The biometrics DHS plans to collect include palm prints, voice prints, iris scans, facial imaging, and even DNA—which are far more invasive than DHS’s current biometric collection of fingerprints, photographs, and signatures.  (For an incisive summary of the proposed changes, click here.)

DHS has given the public until October 13, 2020, to voice their concerns about this highly invasive and unprecedented proposal. If you want your voice heard on this important issue, you must submit a comment through the Federal Register. Your visit to their website and your comment submission are subject to their privacy policy, which you can read here.

Take action

Fill out the form to tell the Department of Homeland Security not to expand its collection of DNA and other biometrics.

Immigrating to the United States, or sponsoring a family member to do so, should not expose your most intimate and sensitive personal data to the U.S. government. But that’s what this new rule would do, by permitting DHS to collect a range of biometrics at every stage of the “immigration lifecycle.” The government does not, and should not, take DNA samples of every person born on U.S. soil—so why should it do so for immigrants coming to the United States, or for U.S. citizens seeking to petition for a family member?

We cannot allow the government to normalize, justify, or develop its capacity for the mass collection of DNA and other sensitive biometrics. This move by DHS brings us one step closer to mass dragnet genetic surveillance. By expanding the types of biometrics collected from each individual, storing all of that data together in one database, and using a unique identifier to link several biometrics to each person, the rule also leaves people’s biometric information more vulnerable to breach or future misuse. The U.S. government has shown time and time again that it cannot protect our personal data. In 2019, DHS admitted that the images of almost 200,000 people taken for its face recognition pilot, as well as automated license plate reader data, were released onto the dark web after a cyberattack compromised a subcontractor. In 2015, the Office of Personnel Management admitted a breach of 5.6 million fingerprints, in addition to the Social Security numbers and other personal information of more than 25 million Americans. We cannot run the risk of similar infiltrations happening again with people’s DNA, voice prints, iris scans, or facial imaging.

Click the external link and tell DHS: I oppose the proposed rulemaking, which would allow the Department of Homeland Security to vastly increase the types of biometrics it collects, as well as double the number of people from whom it collects such biometrics, including children. These actions create security and privacy risks, put people in the United States under undue suspicion, and make immigrants and their U.S. citizen family members vulnerable to surveillance and harassment based on race, immigration status, nationality, and religion.

Take action

Fill out the form to tell the Department of Homeland Security not to expand its collection of DNA and other biometrics.

California Community Leaders Call for Urgent Action on Broadband Access—Add Your Organization to the List

EFF - Tue, 09/29/2020 - 1:13pm

More than fifty California organizations, businesses, and public officials—including the AARP of California, the San Francisco Tech Council, the California Center for Rural Policy, the Khan Academy, and a number of California cities and counties—join Common Sense Kids Action and EFF in urging Governor Gavin Newsom to call the legislature back into a special session to address the state’s digital divide.

The COVID-19 pandemic has accentuated California’s longstanding broadband access crisis. Governor Newsom himself has identified this as a pressing issue, with a recent executive order establishing a state goal of 100 Mbps download speeds for all Californians. More than 2 million Californians lack access to high-speed broadband today. As KQED recently reported, that includes some 1.2 million students across the state who lack adequate Internet access to do their work. In a nationwide survey from Common Sense on the “homework gap,” 12 percent of teachers said a majority of their students lack home access to the Internet or a computer to do schoolwork at home, while 20 percent of K-2 students and 41 percent of high school students need broadband Internet access outside of school at least once a week.

And that’s in a normal year. But this is not a normal year. Lack of access has become an emergency for students today as schooling becomes remote in response to the pandemic. Many students with no access at home have been cut off from school computer labs, libraries, or other places where they may usually get the access they need. This type of situation is exactly what led to the viral pictures of two Salinas students—who clearly wanted to learn—doing their school work on the sidewalk outside the local Taco Bell.

It doesn’t have to be like this. California, home to the world’s fifth-largest economy, has solutions available. In this past legislative session, Sen. Lena Gonzalez built broad support for a deal that would have dedicated more than $100 million a year to securing access to high-speed Internet for families, first responders, and seniors across the state. EFF and Common Sense were proud to sponsor that bill, but despite support from the California Senate and the governor’s office, California Assembly leadership refused to hear the bill and stopped it at the last moment.

California families face these problems every day—regardless of whether their representatives are willing to help them or not. But the people of California need help, and the state should move forward now to begin the work needed to finally close the digital divide.

The following organizations have already joined the call, and we hope Governor Newsom will listen.

If your organization also believes that California cannot wait to start closing the digital divide, please reach out to Senior Legislative Counsel Ernesto Falcon or Legislative Activist Hayley Tsukayama to add your name to the list. 

Signers:

AARP California

Access Humboldt

Access Now

California Center for Rural Policy

Canal Alliance

Central Coast Broadband Consortium

City of Daly City

City of Farmersville

City of Gonzales, City Council Member

City of Greenfield

City of Watsonville

Consumer Reports

Common Sense Kids Action

Council Member City of Gonzales

County of Monterey

EraseTheRedline Inc.

Electronic Frontier Foundation

Fight for the Future

Founder Academy

Gigabit Libraries Network

GoNoodle

Great School Choices

Indivisible Sacramento

InnovateEDU

Intertie Inc.

Khan Academy

King City Mayor Pro tempore

LA Tech4Good

Latino Community Foundation

MakeKnowledge

Mayor, City of Soledad

MediaJustice

Modesto City Councilmember, District 2

Monkeybrains

Monterey County Supervisor, District 1

My Yute Soccer

National Cristina Foundation

National Digital Inclusion Alliance

National Hispanic Media Coalition

New America's Open Technology Institute

Oakland African American Chamber of Commerce

Open Door Community Health Centers

OpenMedia

Peninsula Young Democrats

Public Knowledge

San Francisco Tech Council

School on Wheels

Schools, Health & Libraries Broadband (SHLB) Coalition

The Education Trust-West

The Greenlining Institute

Town of Colma

Trueblood Strategy

Bust 'Em All: Let's De-Monopolize Tech, Telecoms AND Entertainment

EFF - Mon, 09/28/2020 - 1:22pm

The early 1980s were a period of tremendous ferment and excitement for tech, as Americans met a wave of new computers and online services in the four years between 1980 and 1984.

But no matter how exciting things were in Silicon Valley during those years, even more seismic changes were afoot in Washington, D.C., where a jurist named Robert Bork found the ear of President Reagan and a coterie of elite legal insiders and began to fundamentally reshape US antitrust law.

Bork championed an antitrust theory called "the consumer welfare standard," which reversed generations of American competition law, insisting that monopolies and monopolistic conduct were rarely a problem and that antitrust law should only be invoked when there was "consumer harm" in the form of higher prices immediately following from a merger or some other potentially anti-competitive action.

Tech and lax antitrust enforcement grew up together. For 40 years, we’ve lived through two entwined experiments: the Internet, with its foundational design principle that anyone should be able to talk to anyone using any protocol without permission from anyone else; and the consumer welfare standard, with its bedrock idea that monopolies are not harmful unless prices increase.

It’s not a pretty sight. Forty years on, much of the dynamism of technology has been choked out of the industry, with a few firms attaining seemingly permanent dominance over our digital lives and maintaining their rule by buying or merging with competitors, blocking interoperability, and holding whole markets to ransom.

Thankfully, things are starting to change. Congress's long-dormant appetite for fighting monopolists is awakening, with hard-charging hearings and far-reaching legislative proposals.

And yet... Anyone who hangs out in policy circles has heard the rumors: this was all cooked up by Big Cable, the telecom giants who have been jousting with tech over Net Neutrality, privacy, and every other measure that allowed the public to get more value out of the wires in our homes or the radio signals in our skies without cutting in the telecoms for a piece of the action.

Or perhaps it's not the telecoms: maybe it's Big Content, the giant, hyper-consolidated entertainment companies (five publishers, four movie studios, three record labels), whose war on tech freedom has deep roots: given all their nefarious lobbying and skullduggery, is it so hard to believe they'd cook up a fake grassroots campaign to defang Big Tech under color of reinvigorating antitrust?

In any event, why selectively enforce competition laws against tech companies, while leaving these other sectors unscathed?

Why indeed? Who said anything about leaving telecoms or entertainment untouched by antitrust? The companies that make up those industries are in desperate need of tougher antitrust enforcement, and we're here for it.

Who wouldn't be? Just look at the telecoms industry, where cable and phone companies have divided up the nation like the Pope dividing up the "New World," so that they never have to compete head to head with one another. This sector is the reason that Americans pay more for slower broadband than anyone else in the world, and the pandemic has revealed just how bad this is.

When Frontier declared bankruptcy early in the Covid-19 crisis, its disclosures revealed the extent to which American families were being victimized by these monopolies. Frontier’s own planning showed that it could earn $800 million in profits by providing 100Gb fiber to three million households. It didn’t, because the company’s top execs worried that spending money to build out this profitable fiber would make the company’s share price dip momentarily. Since those execs are mostly paid in stock, and since none of those households had an alternative, Frontier left nearly $1 billion on the table and three million households on ancient, unreliable, slow Internet connections.

The big telcos and cable operators are in sore need of adult supervision, competitive pressure, and Congressional hearings.

And things are no better in the world of entertainment, where a string of mergers—most recently the nakedly anticompetitive Disney-Fox merger—has left performers and creators high and dry, with audiences hardly faring any better.

Anyone who tells you that we shouldn't fight tech concentration because the telecom or entertainment industry is also monopolistic is missing the obvious rejoinder: we should fight monopoly in those industries, too.

In boolean terms, trustbusting tech, entertainment, and cable is an AND operation, not an XOR operation.

Besides, for all their public performance of hatred for one another, tech, content, and telcos are perfectly capable of collaborating to screw the rest of us. If you think tech isn’t willing to sell out Net Neutrality, you need to pay closer attention. If you think that tech is the champion who’ll keep the entertainment lobby from installing automated copyright filters, think again. And if you think these competing industries aren’t colluding in secret to rig markets, we’ve got some disturbing news for you.

Surviving the 21st century is not a matter of allying yourself with a feudal lord, choosing Team Content, Team Tech, or Team Telecom and hoping that your chosen champion will protect you from the depredations of the others.

If we’re gonna make it through this monopolistic era of evidence-free policy that benefits a tiny, monied minority at the expense of the rest of us, we need to demand democratic accountability for market abuses and a pluralistic market where dominant firms are subject to controls and penalties, and where we can finally realize our birthright of technological self-determination.

If Big Cable and Big Content are secretly gunning for Big Tech with antitrust law, they're making a dangerous bet: that trustbusting will be revived only to the extent that it is used to limit Big Tech, and then it will return to its indefinite hibernation. That's not how this works. Even if tech is where the new trustbusting era starts, it's not where it will end.

Introducing “YAYA”, a New Threat Hunting Tool From EFF Threat Lab

EFF - Fri, 09/25/2020 - 4:03pm

At the EFF Threat Lab we spend a lot of time hunting for malware that targets vulnerable populations, but we also spend time trying to classify malware samples that we have come across. One of the tools we use for this is YARA. YARA is described as “The Pattern Matching Swiss Knife for Malware Researchers.” Put simply, YARA is a program that lets you create descriptions of malware (YARA rules) and scan files or processes with them to see if they match. 
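To make that concrete, here’s a minimal sketch of the YARA workflow using the yara-python bindings (a separate project from YAYA; the rule and the file path below are illustrative assumptions, not rules we ship):

```python
# A toy YARA rule and scan via the yara-python bindings.
# Real rules describe byte patterns, strings, and conditions
# observed in actual malware families.
import yara

RULE_SOURCE = r'''
rule ToyDownloader
{
    strings:
        $cmd = "cmd.exe /c" ascii
        $url = "http://" ascii
    condition:
        $cmd and $url
}
'''

rules = yara.compile(source=RULE_SOURCE)

# Scan a file on disk; "sample.bin" is a placeholder path.
for match in rules.match("sample.bin"):
    print("Matched rule:", match.rule)
```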

The community of malware researchers has amassed a great deal of useful YARA rules over the years, and we use many of them in our own malware research efforts. One such repository of YARA rules is the Awesome YARA guide, which contains links to dozens of high-quality YARA repositories. 

Managing a ton of YARA rules in different repositories, plus your own sets of rules, can be a headache, so we decided to create a tool to help us manage our YARA rules and run scans. Today we are presenting this open source tool free to the public: YAYA, or Yet Another YARA Automation. 

Introducing YAYA

YAYA is a new open source tool to help researchers manage multiple YARA rule repositories. YAYA starts by importing a set of high-quality YARA rules and then lets researchers add their own rules, disable specific rulesets, and run scans of files. YAYA only runs on Linux systems for now. The program is geared toward new and experienced malware researchers alike, including those who want to get into malware research; no previous YARA knowledge is required to run it.

A video example of YAYA being run
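YAYA itself is a command-line tool, but the underlying “many rulesets, one scan” idea is easy to illustrate with the yara-python bindings (the repository paths and namespace names below are hypothetical; YAYA automates the equivalent clone/update/disable bookkeeping):

```python
# Illustrative sketch: compile rules from several repositories into
# one ruleset, namespaced by source, then scan a file with all of them.
import yara

rulesets = {
    "awesome-yara": "rules/awesome-yara/index.yar",
    "my-rules": "rules/custom/mine.yar",
}

# "Disabling" a ruleset amounts to leaving it out of the mapping.
rules = yara.compile(filepaths=rulesets)

for match in rules.match("suspicious_sample.bin"):
    # Each match reports which repository's rule fired.
    print(f"{match.namespace}: {match.rule}")
```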

If you are interested in getting YAYA or contributing to its development, you can find the GitHub repository here. We hope this tool will make a useful addition to many malware researchers’ toolkits.

The Government’s Antitrust Suit Against Google: Go Big and Do It Right

EFF - Fri, 09/25/2020 - 3:57pm

U.S. antitrust enforcers are reported to be crafting a lawsuit against Google (and its parent company, Alphabet). The Department of Justice and a large coalition of state attorneys general are meeting this week and could file suit very soon. While it will reportedly focus on Google’s dominance in online advertising and search, the suit could encompass many more facets of Google’s conduct, including its dominance in mobile operating systems and Web browsers. This suit has the potential to be the most significant antitrust case against a technology company in over 20 years, since the DOJ’s 1998 suit against Microsoft.

All users of Google products, and everyone who views Google-brokered ads—in other words, nearly all Internet users—have a stake in the outcome of this lawsuit. That’s why it needs to be done right. It should cover more than just advertising markets, and focus on Google’s search, browser, and mobile OS market power as well. And the antitrust authorities should pursue smart remedies that look beyond money damages, including breakups and requiring interoperability with competitive products, all while being careful to protect users’ free speech and privacy interests that will inevitably be caught up in the mix. Like many, we worry that if this is rushed, it might not go well, so we urge the enforcers to take the time they need to do it right, while recognizing that there is ongoing harm.

The Importance of “Big Case” Antitrust

U.S. government antitrust suits have an important place in the history of innovation because, even when they didn’t end with a court order to break up a company, there is a reasonable case to be made that they created breathing room for innovation. The DOJ’s suit against IBM, for “monopolizing or attempting to monopolize the general purpose electronic digital computer system market,” began in 1969, went to trial in 1975, and was withdrawn in 1982. Even though the case didn’t end with a court-ordered remedy, a decade-plus of public scrutiny into IBM’s business practices arguably kept the company from squashing nascent competitors like Microsoft as it had squashed other rivals for decades. That result likely helped launch the personal computer revolution of the 1980s, fueled by the ideas of numerous companies beyond “Big Blue.”

The government’s suit against AT&T, which had held a legally recognized monopoly over telephone service in the U.S. for most of the twentieth century, almost ended the same way as the IBM suit. Filed in 1974, US v. AT&T had not reached a judgment by 1982. But years of scrutiny, combined with the rise of potential competitors who were itching to enter telephone service and related markets, led AT&T’s leadership to agree to a breakup, allowing the “Baby Bells” and technology-focused spinoffs to innovate in data communications and the development of the Internet in ways that “Ma Bell” could not. It’s hard to imagine the Internet entering widespread use in the 1990s with a monolithic AT&T still deciding who could connect to telephone networks and what devices they could use. Even though the successors to AT&T eventually re-assembled themselves, the innovation unleashed by the breakup is still with us.

The 1998 case against Microsoft over its exclusion of rival Web browser software from PCs was also a boon to innovation. At the time, an AT&T-style breakup of Microsoft, perhaps splitting its operating system and application software divisions into separate companies, was a distinct possibility. In fact, the trial court ordered just that. On appeal, the D.C. Circuit rejected a breakup and instead ordered Microsoft to adhere to ongoing restrictions on its behavior towards competing software vendors and PC manufacturers. Once again, this seemed like something less than a shining success. But facing ongoing monitoring of its behavior towards competitors, Microsoft shelved plans to crush a little-known search company called Google.

Two decades later, Google has expanded into so many Internet-related markets that its position in many of them appears unassailable. Google has been able to buy out or drive out of business nearly every company that dares compete with it in a variety of markets. The engine of disruptive innovation has broken down again, and a bold, thoughtful antitrust challenge by federal and state enforcers may be just the thing to help revive it.

Look at All of Alphabet, Not Just Advertising

News reports suggest that the suit against Google may focus on the company’s dominance of Web advertising, which is alleged to depress the ad revenues available to publishers. The suit may also address Google’s conduct towards competitors in search markets, such as travel and product searches. It could also look at Google’s history of mergers and acquisitions, such as its 2007 purchase of the advertising company DoubleClick, to see whether those deals gave it monopoly power in various markets.

On the scope of the lawsuit, antitrust enforcers should go big. A suit that challenges Google’s conduct in advertising markets alone would not benefit consumers very much. While Google’s vast collection of data about users’ browsing habits and preferences, derived from its ad networks, causes great harm to consumers, it’s not yet clear whether dozens of smaller competitors in the ad-tech space, some of which are extremely shady, will be better stewards of user privacy. The answer here involves both antitrust and much-needed new privacy law. That will help consumers across the board.

The DOJ and states should also challenge Google’s leveraging of its existing market power in search, Web browsing (Chrome), and mobile operating systems (Android) to shut out competition. The suit should also address whether Google’s collection and control of user data across so many apps and devices—plus data from the millions of websites connected to Google’s advertising networks—gives it an advantage in other markets that triggers antitrust scrutiny. These are, in part, novel and difficult claims under antitrust law. But enforcers should aim high. Holding Google to account for its wielding of monopoly power, even if the effort does not succeed entirely, can help the courts and possibly Congress adapt antitrust law for the digital age. It can also help create a bulwark against further centralization of the Internet. And as history shows, even a suit that doesn’t ultimately lead to a breakup can help make room for innovation from diverse sources beyond the walls of today’s tech giants.

Have A Big Toolbox of Remedies: Interoperability, Follow-On Innovation, Conduct Rules, and Breakups

It’s obvious by now that antitrust enforcers need to pursue remedies beyond money damages. Google and Alphabet are large enough to treat almost any fine as a cost of doing business without significantly changing their behavior. A court-ordered breakup of Alphabet should be seriously considered as part of the case, as well—perhaps an order to separate aspects of Google’s advertising business or to unwind Google’s most problematic acquisitions.

Breakups shouldn’t be the only arrow in the government’s quiver of remedies, though. Ordering Google to allow its competitors to build products that interoperate would strike at the heart of Google’s monopoly problem and might be an easier lift. As we’ve shown through many examples, building products that can connect to existing, dominant products, without permission from the incumbent vendor, can be a potent antimonopoly weapon. It can help put users back into control.

In Google’s case, that could mean ordering Google not to interfere with products that block Google-distributed ads and tracking. Allowing ad- and tracker-blocking technologies to flourish would allow users to shape the ad-tech market by favoring companies that collect less user data, and who are more responsible with the data they have. It could also spur innovation in that market (note: EFF’s Privacy Badger is one of those technologies, but this could help many services). In that scenario, Google would have to compete for users’ views and clicks by protecting their privacy.

Interoperability requirements imposed as an antitrust remedy, or as part of a settlement, avoid many of the problems that come with more generalized technology mandates, because orders can be crafted to suit a particular company’s technologies and practices. They can also involve ongoing monitoring to ensure compliance, helping to avoid the security and other problems that could arise with a one-size-fits-all approach.

The upcoming lawsuit against Google is a great opportunity to begin addressing the problems of Big Tech’s monopoly power. If enforcers act boldly, they can set new antitrust precedents that will promote competition throughout high-tech markets. They can also use this opportunity to advance interoperability as a spur to innovation and a monopoly remedy.

Students Are Pushing Back Against Proctoring Surveillance Apps

EFF - Fri, 09/25/2020 - 1:40pm

Special thanks to legal intern Tracy Zhang, who was lead author of this post.

Privacy groups aren’t the only ones raising the alarm about the dangers of invasive proctoring apps. Through dozens of petitions across the country and around the globe, students too are pushing school administrators and teachers to consider the risks these apps create.

Students at the University of Texas at Dallas are petitioning the school to stop using the proctoring app Honorlock. The petition has over 6,300 signatures, notes that Honorlock can collect “your face, driver’s license, and network information,” and calls use of Honorlock a “blatant violation of our privacy as students.” Students at Florida International University are petitioning their school to stop using Honorlock as well, gathering over 7,200 signatures. They highlight the amount of data that Honorlock collects and that Honorlock is allowed to keep the information for up to a year and, in some cases, 2 years. Students at California State University Fullerton are petitioning the school to stop using Proctorio, calling it “creepy and unacceptable” that students would be filmed in their own house in order to take exams. The petition has over 4,500 signatures.  

But it’s not just privacy that's at stake. While almost all the petitions we’ve seen raise very real privacy concerns—from biometric data collection, to the often overbroad permissions these apps require over the students’ devices, to the surveillance of students’ personal environments—these petitions make clear that proctoring apps also raise concerns about security, equity and accessibility, cost, increased stress, and bias in the technology.  

A petition by the students at Washington State University, which has over 1,700 signatures, raises concerns that ProctorU is not secure, pointing to a July 2020 data breach in which the information of 440,000 users was leaked. Students at the University of Massachusetts Lowell are petitioning the school to stop using Respondus, in particular calling out the deep access that its Ring-0 software has to students’ devices, and noting that the software “creates massive security vulnerabilities and attack vectors, and thus cannot be tolerated on personal devices under any circumstances.” The petition has over 1,000 signatures.

Students at the University of Colorado Boulder raise concerns about the accessibility of proctoring app Proctorio, saying that “the added stress of such an intrusive program may make it harder for students with testing anxiety and other factors to complete the tests.” The petition has over 1,100 signatures. The Ontario Confederation of University Faculty Associations wrote a letter speaking out about proctoring technologies, noting that the need for access to high-speed internet and newer computer technologies “increase [students’] stress and anxiety levels, and leave many students behind.” 

In addition to privacy concerns, the petition from students at Florida International University notes that because Honorlock requires a webcam and microphone, “students with limited access to technology or a quiet testing location” are placed at a disadvantage, and that the required use of such technology “does not account for students with difficult living situations.” A petition against Miami University’s use of Proctorio notes that its required use “discriminates against neurodivergent students, as it tracks a student’s gaze, and flags students who look away from the screen as ‘suspicious,’” which “negatively impacts people who have ADHD-like symptoms.” The petition also notes that proctoring software often has difficulty recognizing students with black or brown skin and tracking their movements. That petition has over 400 signatures.

Students have seen success through these petitions. A petition at The City University of New York, supported by the University Student Senate and other student body groups, resulted in the decision that faculty and staff may not compel students to participate in online proctoring. After students at the University of London petitioned against the use of Proctortrack, the university decided to move away from the third-party proctoring provider.

Below, we’ve listed some of the larger petitions and noted their major concerns. There are hundreds more, and regardless of the number of signatures, it’s important to note that even a few concerned students, teachers, or parents can make a difference at their schools.  

As remote learning continues, these petitions and other pushback from student activists, parents, and teachers will undoubtedly grow. Schools must take note of this level of organized activism. Working together, we can make the very real concerns about privacy, equity, and bias in technology important components of school policy, instead of afterthoughts.  

***** 

If you want to learn more about defending student privacy, EFF has several guides and blog posts that are a good place to start.  

  • A detailed explanation of EFF’s concerns with the unnecessary surveillance conducted by proctoring apps is available here.
  • Our Surveillance Self-Defense Guide to student privacy covers the basics of techniques that are often used to invade privacy and track students, as well as what happens to the data that’s collected, and how to protect yourself. 
  • Proctoring apps aren’t the only privacy-invasive tools schools have implemented. Cloud-based education services and devices can also jeopardize students’ privacy as they navigate the Internet—including children under the age of 13. From Chromebooks and iPads to Google Apps for Education, this FAQ provides an entry-point to learn about school-issued technology and the ramifications it can have for student privacy. 
  • Parents and guardians should also understand the risks created when schools require privacy-invasive apps, devices, and technologies. Our guide for them is a great place to start, with ways to take action, and includes a printable FAQ that can be quickly shared with other parents and brought to PTA meetings. 
  • All student privacy-related writing EFF does is collected on our student privacy page, which also includes basic information about the risks students are facing.
  • In the spring of 2017, we released the results of a survey that we conducted in order to plumb the depths of the confusion surrounding ed tech. And as it turns out, students, parents, teachers, and even administrators have lots of concerns—and very little clarity—over how ed tech providers protect student privacy.
  • COVID has forced many services online besides schools. Our guide to online tools during COVID explains the wide array of risks this creates, from online chat and virtual conferencing tools to healthcare apps.
  • Some schools are mandating that students install COVID-related technology on their personal devices, but this is the wrong call. In this blog post, we explain why schools must remove any such mandates from student agreements or commitments, and further should pledge not to mandate installation of any technology, and instead should present the app to students and demonstrate that it is effective and respects their privacy. 

*****

Below is a list of just some of the larger petitions against the required use of proctoring apps as of September 24, 2020. We encourage users to read the privacy policies of any website visited via these links.

  • Auburn University students note that “proctoring software is essentially legitimized spyware.”
  • NJIT petitioners write that while students agreed to take classes online, they “DID NOT agree to have [their] privacy invaded.”
  • CUNY students successfully leveraged 27,000 signatures to end the “despicable overreach” of proctoring app Proctorio.
  • Students at the University of Texas at Dallas, Dallas College, and Texas A&M called the use of Honorlock “both a blatant violation of our privacy as students and infeasible for many.”
  • University of Tennessee Chattanooga students say that “Proctorio claims to keep all information safe and doesn't store or share anything but that is simply not true. Proctorio actually keeps recordings and data on a cloud for up to 30 days after they have been collected.” 
  • Washington State University students note that in July, “ProctorU had a data breach of 440,000 students/people's information leaked on the internet.”
  • In a letter to the Minister of Colleges and Universities, the Ontario Confederation of University Faculty Associations argues that “Proctortrack and similar proctoring software present significant privacy, security, and equity concerns, including the collection of sensitive personal information and the need for access to high-speed internet and newer computer technologies. These requirements put students at risk, increase their stress and anxiety levels, and leave many students behind.”
  • In a popular post, a self-identified student from Florida State University wrote on Reddit that “we shouldn't be forced to have a third-party company invade our privacy, and give up our personal information by installing what is in reality glorified spyware on our computers.” An accompanying petition by students at FSU says that using Honorlock “blatantly violates privacy rights.”
  • CSU Fullerton students call it “creepy and unacceptable” that students would be filmed in their own house in order to take exams, and declare they “will not accept being spied on!”
  • Miami University petitioners argue that “Proctorio discriminates against neurodivergent students, as it tracks a student's gaze, and flags students who look away from the screen as 'suspicious' too, which negatively impacts people who have ADHD-like symptoms.” The petition goes on to note that “students with black or brown skin have been asked to shine more light on their faces, as the software had difficulty recognizing them or tracking their movements.”
  • CU Boulder students say that, with Proctorio, the “added stress of such an intrusive program may make it harder for students with testing anxiety and other factors to complete the tests.”
  • UW Madison students are concerned about Honorlock’s “tracking of secure data whilst in software/taking an exam (cookies, browser history); Identity tracking and tracing (driver's license, date of birth, address, private personal information); Voice Tracking as well as recognition (Specifically invading on privacy of other members of my home); Facial Recognition and storage of such data.”
  • Florida International University students note that “Honorlock is allowed to keep [recordings of students] for up to a year, and in some cases up to 2 years.” The petition also notes that “Honorlock requires a webcam and microphone. This places students with limited access to technology or a quiet testing location at a disadvantage…You are required to be in the room alone for the duration of the exam. This does not account for students with difficult living situations.”
  • Georgia Tech petitioners are concerned that data collected by Honorlock “could be abused, for example for facial recognition in surveillance software or to circumvent biometric safety system.”
  • University of Central Florida students argue that “Honorlock is not a trustworthy program and students should not be forced to sign away their privacy and rights in order to take a test.”
  • UMass Lowell students call out the “countless security vulnerabilities that are almost certainly hiding in the Respondus code, waiting to be exploited by malware and/or other forms of malicious software.”
  • University of Regina students argue that “facial recognition software and biometric scanners have been shown to uphold racial bias and cannot be trusted to accurately evaluate people of color. Eye movement and body movement is natural and unconscious, and for many neurodivergent people is completely unavoidable.”

How Police Fund Surveillance Technology Is Part of the Problem

EFF - Wed, 09/23/2020 - 6:08pm

Law enforcement agencies at the federal, state, and local level are spending hundreds of millions of dollars a year on surveillance technology in order to track, locate, watch, and listen to people in the United States, often targeting dissidents, immigrants, and people of color. EFF has written tirelessly about the harm surveillance causes communities, and its effects are well documented. What is less talked about, but no less disturbing, are the myriad ways agencies fund the hoarding of these technologies.

In 2016, the U.S. Department of Justice reported on the irresponsible and unregulated use and deployment of police surveillance measures in the town of Calexico, California. One of the most notable examples of that frivolous spending culture: roughly $100,000 in seized assets spent on surveillance equipment (such as James Bond-style spy glasses) to dig up dirt on city council members and complaint-filing citizens, with the aim of blackmail and extortion. Another example: a report from the Government Accountability Office showed that U.S. Customs and Border Protection officers used money intended to buy food and medical equipment for detainees to instead buy tactical gear and equipment.

Drawing attention to how police fund surveillance technology is a necessary step, not just to exposing the harm it does, but also to recognizing how opaque and unregulated the industry is. Massive amounts of funding for surveillance have allowed police to pay for dozens of technologies that residents have no control over, or even knowledge about. When police pay for the use of predictive policing software, do town residents get an inside look at how it works before it deploys police to arrest someone? No, often because that technology is “proprietary” and the company will claim that disclosure would give away trade secrets. Some vendors even tell police not to talk to the press about the technology without the company’s permission, or instruct cops to leave its use out of arrest reports. When law enforcement pays private companies to use automated license plate readers, what oversight do the surveilled have to make sure that data is safe? None—and it often isn’t safe. In 2019, a hacked ALPR vendor allowed 50,000 U.S. Customs and Border Protection license plate scans to leak onto the web.

Law enforcement will often frame surveillance technology as being solely a solution to crime, but when viewed as a thriving industry made up of vendors and buyers, we can see that police surveillance has a whole lot more to do with dollars and cents. And often it’s that money driving surveillance decisions, not the community’s interests.

How Police Fund Surveillance:
Asset Forfeiture

Civil asset forfeiture is a process that allows law enforcement to seize money and property from individuals suspected of being involved in a crime before they have been convicted or sometimes before they’ve even been charged. When a local law enforcement agency partners with a federal agency it can apply for a share of the resources seized through a process called “equitable sharing.” Law enforcement often spends these funds on electronic surveillance, such as wiretaps, but also on other forms of surveillance technology, such as automated license plate readers.

Private Benefactors 

Wealthy individuals can have an immense impact on public safety, and are often the sources behind large-scale surveillance systems. Baltimore’s “Aerial Investigation Research” program, which would place a spy plane over the city, was funded in part by billionaires Laura and John Arnold, who put up $3.7 million. Another billionaire, Ripple’s Chris Larsen, has donated millions to neighborhood business districts throughout the San Francisco Bay Area to install sophisticated camera networks to deter property crime. The San Francisco Police Department was given live access to these cameras for over a week in order to spy on BLM protestors, which invaded their privacy and violated a local surveillance ordinance.

In Atlanta, businessman Charlie Loudermilk gave the city $1 million in order to create the Loudermilk Video Integration Center where police receive live feeds from public and private cameras. 

These grants, gifts, and donations illustrate the imbalance of power when it comes to decisions about surveillance technology.

Federal Grants

The federal government often pursues its nationwide surveillance goals by providing money to local law enforcement agencies. The U.S. Department of Justice has an entire office devoted to these efforts: the Bureau of Justice Assistance (BJA). Through the BJA, local agencies can apply for sums ranging from tens of thousands to millions of dollars for police equipment, including surveillance technology. Through Justice Assistance Grants (JAGs), agencies have acquired license plate readers and mobile surveillance units, along with other surveillance technologies. The BJA even has a special grant program for body-worn cameras.

Meanwhile, the U.S. Department of Homeland Security has paid local agencies to acquire surveillance technology along the U.S.-Mexico border through the Urban Area Security Initiative and Operation Stonegarden, a program that encourages local police to collaborate on border security missions.

Private Foundations

Many foundations provide technology, or funds to purchase technology, to local law enforcement. This process is similar to the “dark money” phenomenon in election politics: anonymous donors can provide money to a non-profit, which then can pass it on to law enforcement. 

Police foundations receive millions of dollars a year from large corporations and individual donors. Companies like Starbucks, Target, Facebook, and Google all provide money to police foundations which go on to buy equipment ranging from long guns to full surveillance networks.

According to ProPublica, in 2007, Target single-handedly paid for the software at LAPD’s new state-of-the-art surveillance center.

Kickbacks Between Surveillance Vendors and Police Departments

Because selling police surveillance tech is such a lucrative industry, it is no surprise that an economy of shady, unregulated kickback schemes has cropped up around it. Under these arrangements, police receive economic incentives to promote the adoption of certain surveillance equipment: in their own jurisdiction, to the people they should be protecting, and even to other towns, states, and countries.

Microsoft developed the sweeping city-wide surveillance system known as the Domain Awareness System for the New York City Police Department; it was built gradually over years and cost $30 million. Its formal unveiling in 2012 led Microsoft to receive a slew of requests from other cities to buy the technology. Now, according to the New York Times, the NYPD receives 30% of “gross revenues from the sale of the system and access to any innovations developed for new customers.”

This leads to a disturbing question that undergirds many of these public-private surveillance partnerships in which police get kickbacks: Does our society actually need that much surveillance, or are the police just profiting off its proliferation? The NYPD and Microsoft make money when a city believes it needs to invest in a large-scale surveillance system. That undermines our ability to know if the system actually works at reducing crime, because its users have an economic interest in touting its effectiveness. It also means that there are commercial enterprises that profit when you feel afraid of crime.

Ring, Amazon’s surveillance doorbell company, now has over 1,300 partnerships with police departments across the United States. As part of this arrangement, police are offered free equipment giveaways in exchange for getting a certain number of residents to download the Neighbors app or use a town’s discount code to purchase a Ring camera. These purchases are often subsidized by the town itself.

This raises the very troubling question: do police think you need a camera on your front door because your property is in danger, or are they hustling for a commission from Amazon when they make a sale?

This arrangement is sure to deepen the public’s distrust of police officers and their public safety advice. How would people know if safety tips are motivated by an attempt to sow fear, and by extension, sell cameras and build an accessible surveillance network?

They Don’t Buy Surveillance Equipment, They Use Yours 

Throughout the country, police have been increasingly relying on private surveillance measures to do the spying they legally or economically cannot do themselves. This includes the Ring surveillance doorbells people put on their front doors, the license plate readers homeowners’ associations mount at the entrances to their communities, and the full camera networks used by business improvement districts. No matter who controls surveillance equipment, police will ask to use it.

Thus, any movement toward scrutinizing how police fund surveillance must also include scrutiny of our own decisions as private consumers. The choice of individuals to install their own invasive technology ultimately enables police abuse of the technology. It also allows police to circumvent measures of transparency and accountability that apply to government-owned surveillance technology.

Conclusion

Community Control of Police Surveillance (CCOPS) measures around the country are starting to bring public awareness and transparency to the purchase and use of surveillance tech. But there are still too few of these laws ensuring democratic control over acquisition and application of police technology.

With police departments spending more and more money on face recognition, video surveillance, automated license plate readers, and dozens of other specific pieces of surveillance tech, it's time to scrutinize the many dubious and opaque funding streams that bankroll them. But oversight alone will likely never be enough: funding runs into the billions and comes from hard-to-trace sources, new technologies are always on the rise, and their uses mostly go unregulated and undisclosed.

So we must push for huge cuts in spending on police surveillance technology across the nation. This is a necessary step to protect privacy, freedom, and racial justice. 

The Time Has Come to End the PACER Paywall

EFF - Tue, 09/22/2020 - 7:19pm

In a nation ruled by law, access to public court records is essential to democratic accountability. Thanks to the Internet and other technological innovations, that access should be broader and easier than ever. The PACER (Public Access to Court Electronic Records) system could and should play a crucial role in fulfilling that need. Instead, it operates as an economic barrier, imposing fees for searching, viewing, and downloading federal court records, making it expensive for researchers, journalists, and the public to monitor and report on court activity. It's past time for the wall to come down.

The bipartisan Open Courts Act of 2020 aims to do just that. The bill would provide public access to federal court records and improve the federal courts' online record system, eliminating PACER's paywall in the process. This week, EFF and a coalition of civil liberties organizations, transparency groups, retired judges, and law libraries joined together to push Congress and the U.S. federal courts to eliminate the paywall and expand access to these vital documents. In a letter (pdf) addressed to the Director of the Administrative Office of the United States Courts, which manages PACER, the coalition calls on the AO not to oppose this important legislation.

Passage of the bill would be a huge victory for transparency. Anyone interested in accessing the trove of federal public court records residing in PACER, from lawyers to the press to the public, must currently pay 10 cents per page for search results and 10 cents per page for documents retrieved. These costs add up quickly, and they have been criticized by open-government activists such as the late Aaron Swartz and Public.Resource.Org. As noted in the letter, which was signed by Public Knowledge, Open the Government, R Street, and the Project on Government Oversight, among others,

"It is unjust to charge for court documents and place undue burdens on students, researchers, pro se litigants, and interested members of the public – not to mention the journalists who cover the courts. The fairest alternative is not moving from 10 cents a page to eight cents; it’s no price at all."

The Open Courts Act of 2020 offers a clear path for finally making PACER access free to all, and should be supported by anyone who wants the public to engage more readily with their federal courts system. EFF urges Congress and the courts to support this important legislation and remove the barriers that make PACER a gatekeeper to information, rather than the open path to public records that it ought to be. 

EFF, CDT Sue Government To Obtain Records About Federal Agencies Pulling Advertising From Platforms

EFF - Tue, 09/22/2020 - 12:56pm
Reducing Federal Dollars Based on Recipients’ Perceived Viewpoints Violates First Amendment

Washington, D.C.—The Electronic Frontier Foundation (EFF) and the Center for Democracy and Technology (CDT) today filed a Freedom of Information Act (FOIA) lawsuit against the government to obtain records showing whether federal agencies have cut their advertising on social media as part of President Trump’s broad attack on speech-hosting websites he doesn’t like.

In May, Trump issued an executive order in retaliation against platforms that exercised their constitutionally protected right to moderate his and other users' posts. The order gave executive departments and agencies 30 days to report their online ad spending to the Office of Management and Budget (OMB). The Department of Justice (DOJ) was charged with reviewing those reports and assessing whether the platforms impose alleged "viewpoint-based speech restrictions" that would make them "problematic vehicles" for government ads.

EFF and CDT seek records, reports, and communications about the spending review to shine a light on whether agencies have stopped or reduced online ad spending, or are being required to do so, based on a platform’s editorial decisions to flag or remove posts. Although the government has broad discretion to decide where it spends its advertising dollars, reducing or denying federal dollars they previously planned to spend on certain platforms based on officials’ perception of those entities' political viewpoints violates the First Amendment.

“The government can’t abuse its purse power to coerce online platforms to adopt the president’s view of what it means to be ‘neutral,’” said EFF Staff Attorney Aaron Mackey. “The public has a right to know how the government is carrying out the executive order and if there’s evidence that the president is retaliating against platforms by reducing ad spending.”

On top of being unconstitutional, the ad spending reduction is also dangerous. Federal agencies advertise on social media to communicate important messages to the public, such as ads encouraging the use of masks to fight the spread of COVID-19 and warning of product recalls. Pulling online ads could have a potentially serious impact on the public’s ability to receive government messaging.

“The President continues his attempts to bully social media companies when he disagrees with their editorial choices. These threats are not only unconstitutional, but have real-life implications for internet users and voters alike,” said Avery Gardiner, CDT’s General Counsel. “CDT will continue to push for these documents to ensure the U.S. government isn’t using the power of its advertising purse to deter social media companies from fighting misinformation, voter suppression, and the stoking of violence on their platforms.”

EFF and CDT filed FOIA requests with OMB and DOJ in July for records; neither agency has responded or released responsive documents.

Trump issued the executive order aimed at online speech platforms a few days after Twitter marked his tweets for fact-checking because they were “potentially misleading.”

Private entities like social media companies, newspapers, and other digital media have a First Amendment right to edit and curate user-generated content on their sites. Trump’s order, which contains several provisions, including the one at issue in this lawsuit, would give the government power to punish platforms for their editorial decisions. It would strip them of protections provided by 47 U.S.C. § 230, often called Section 230, which grants online intermediaries broad immunity from liability arising from hosting users’ speech.

For the complaint:
https://www.eff.org/document/eff-cdt-v-omb-doj-foia-complaint

For more on the Executive Order:
https://www.eff.org/deeplinks/2020/05/dangers-trumps-executive-order-explained

Contact:
Aaron Mackey, Staff Attorney, amackey@eff.org
Avery Gardiner, General Counsel, Center for Democracy and Technology, agardiner@cdt.org

Exposing Your Face Isn't a More Hygienic Way to Pay

EFF - Mon, 09/21/2020 - 8:38pm

A company called PopID has created an identity-management system that uses face recognition. Its first use case is in-store, point-of-sale payments that use face recognition to authorize payment.

They are promoting it as a tool for restaurants, claiming that it is pandemic-friendly because it is contactless.

In reality, the PopID payment system is less secure than alternatives, unfriendly to privacy, and likely riskier than other payment options for anyone concerned about catching COVID-19. On top of these issues, PopID is pitching it as a screening tool for COVID-19 infection, another task it's completely unsuited for.

Equities issues

It's important that payment systems not disadvantage cash payments, which have the best social equity. Many people are under-banked, and in hard times such as these, many use cash to help manage their budgets and spending. Cash is also the most privacy-friendly way to pay. As convenient as other systems are, and despite cash not being contactless, we need to protect people's ability to use cash.[1]

PopID is a charge-up-and-spend system. To lower its costs, PopID has users charge up an account with the company using a credit card or debit card, and payments are deducted from that balance. Charge-and-spend systems are good for the store and less good for the person using them; they amount to an interest-free loan from the consumer to the merchant. This is no small thing: Starbucks, PayPal, and Walmart all hold billions in interest-free loans from their customers. This further disadvantages people on tight budgets, as it requires them to give PopID money before it is spent and to keep a balance in the system in anticipation of spending it.
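The float math is simple enough to sketch. Every figure below is hypothetical, chosen only to illustrate how prepaid balances become an interest-free loan at scale.

```python
users = 100_000        # hypothetical number of enrolled users
avg_balance = 15.00    # hypothetical average prepaid balance, in dollars
annual_rate = 0.02     # assumed rate that cash could earn elsewhere

float_total = users * avg_balance             # customer money the company holds
forgone_interest = float_total * annual_rate  # interest that stays with the company

print(f"Prepaid float held: ${float_total:,.0f}")                         # $1,500,000
print(f"Interest forgone by customers each year: ${forgone_interest:,.0f}")  # $30,000
```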

PopID also requires their customers to have a smartphone for enrollment-by-selfie, which disadvantages those who don't have one.

To be fair, these issues are largely fixable. PopID could allow someone to enroll without a phone at any payment station. It could allow charge-up with cash, and it could allow direct charge.[2] But for now, the company does not offer these easy solutions.

Fitness to task

Looking beyond its potentially fixable perpetuation of systemic inequalities, it's important that a system actually do what it's intended to do. PopID pitches its system as pandemic-friendly, offering both contactless payments and COVID-19 screening, with the camera doubling as a temperature sensor. Neither is a good idea.

Temperature scanning with commodity cameras won't work

PopID promotes their system as a temperature scanning device for employees and customers alike. Temperature screening itself has limited benefit, as around half the people who have COVID-19 are asymptomatic.
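A minimal worked example shows the ceiling that asymptomatic spread imposes, even granting the sensor a generous, hypothetical accuracy:

```python
p_fever = 0.5             # per the text: roughly half of infections show no symptoms,
                          # so at most half present a fever to detect
sensor_sensitivity = 0.9  # hypothetical: the camera catches 90% of real fevers

p_detected = p_fever * sensor_sensitivity
print(f"Best case, screening flags {p_detected:.0%} of infected people")  # 45%
```

Even a perfect sensor cannot flag an infection that produces no fever, so more than half of infected people walk through regardless.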

Moreover, accurate temperature screening is expensive and hard. PopID is not the only organization to promote cheap face recognition with COVID-19 screening as the excuse. In reality, the cheap camera in a point-of-sale terminal is both inaccurate and intrusive, as Jay Stanley of the ACLU describes in detail.

There's a wide range in the accuracy of temperature-scanning cameras, in normal human body temperature across a population, and even in an individual's temperature depending on time of day and physical activity. Even the best cameras are finicky: they work poorly when people wear hats, glasses, or masks, and they require viewing only one subject at a time.

Speeding up a sandwich shop line does help prevent COVID-19, because we know that spending too much time too close to other people is the primary mode of transmission. But temperature scanning at the point of payment doesn't help people space themselves out or shorten their contact.

Face recognition raises COVID-19 risks

PopID pitches their system as good during the pandemic because it is contactless. Yet it is worse than payment alternatives.

PopID's web site shows a picture of a payment terminal, with options to use contactless payment systems such as Apple Pay, Google Pay, and Samsung Pay. Presumably, any contactless credit card could be used. Additionally, a barcode system like the one Starbucks uses is contactless.

PopID's point of sale terminal

Any of these contactless payment alternatives are much better than PopID from a public health standpoint because they don't require someone to remove their mask. The LA Times article comments parenthetically, "(The software struggles at recognizing faces with masks.)"

Indeed, any contactless payment system has less contact than using cash, yet even cash is low-risk. Almost all COVID-19 transmission is through breathing in virus particles in droplets or aerosols, not from fomites that we touch. Moreover, cash is easy to wash in soapy water.

This is a big deal for a supposedly pandemic-friendly system. The most recent restaurant-based superspreading event in the news is particularly relevant. A person in South Korea sat for two hours in a coffee shop under the air conditioning, and spread the disease to twenty-seven other people, who in turn spread it to twenty-nine other people, for a total of fifty-six people. And yet, none of the mask-wearing employees got the virus.

This is particularly relevant to PopID: a contactless system that makes someone take off a mask endangers the other customers. Ironically, if customers see a store using PopID, they had better be wearing masks, because PopID will require those masks to come off momentarily. Or they could just shop somewhere else.

Security

PopID brings in new security risks that do not exist in other systems. It holds the user's payment information (for charging up the payment account), their phone number (it's part of registration), their name, and of course the selfie that's used for face recognition. There's no reason to suppose PopID is any worse than the cloud services that inevitably lose people's information, but no reason to think it's better. Thus, we should assume that eventually a hacker is going to get all that information.

However, being a payment system, there is the obvious additional risk of fraud. PopID says, "Your Face now becomes your singular, ultra-secure 'digital token' across all PopID transactions and devices," yet that can't possibly be so.

Face recognition systems are well known to be inaccurate, as NIST recently showed, particularly for Black, Indigenous, Asian, and other People of Color, for women, and for trans and nonbinary people. False positives are common, and in a payment system, a false positive means a false charge. PopID says it will confirm any match by asking the person their name. To be fair, this is not a bad secondary check, but it is hardly "ultra-secure." Moreover, it requires every PopID customer to tell the whole store their name (or use a PopID pseudonym).
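A quick sketch shows why no one-to-many face match can be "ultra-secure" at scale. The false match rate and enrollment figures below are hypothetical; real error rates vary widely and, per the NIST findings above, unevenly across demographics.

```python
enrolled_faces = 1_000_000   # hypothetical number of enrolled users
fmr_per_comparison = 1e-6    # hypothetical false match rate per comparison

# Chance that a single payment lookup falsely "matches" at least one wrong face:
p_false_match = 1 - (1 - fmr_per_comparison) ** enrolled_faces
print(f"Chance a lookup hits the wrong person: {p_false_match:.0%}")  # ~63%

# A name check as a second factor only filters out false matches whose
# names differ, so the underlying error floor still matters at scale.
```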

Lastly, PopID doesn't say how they'll permit someone to dispute charges, an important factor since the credit card industry is regulated with excellent consumer protection. In the event of fraud, it's much easier to be issued a new credit card than a new face.

The end result is that PopID's pay-by-face is less secure than using a contactless card, and less secure than cash.

Privacy

PopID is an incipient privacy nightmare. The obvious privacy issues of an unregulated payment system that knows where your face has been are only the start of the problem. The LA Times writes:

But [CEO of PopID, John] Miller’s vision for a face-based network goes beyond paying for lunch or checking in to work. After users register for the service, he wants to build a world where they can “use it for everything: at work in the morning to unlock the door, at a restaurant to pay for tacos, then use it to sign in at the gym, for your ticket at the Lakers game that night, and even use it to authenticate your age to buy beers after.”

“You can imagine lots of things that you can do when you have a big database of faces that people trust,” Miller said.

Nothing more needs to be said. PopID as a payment system is a stalking horse for a face-surveillance panopticon and salable database of trusted faces.

Conclusion

PopID is less secure and less private than alternative forms of payment, contactless or not. It brings with it a lot of social equity issues that negatively impact marginalized communities. Moreover, any store using PopID and thus requiring other people to remove their masks to pay is exposing you to COVID-19 that you would not otherwise be exposed to.

Most alarmingly, it is also an insecure for-profit surveillance system building a database of you, your face, your purchases, your movements, and your habits.

  [1] This is a complex issue: we all intuitively think of money as dirty, and in pandemic times this is even more on everyone's mind. However, the evidence as of this writing (September 2020) is that exposure through touch is possible but not common, while almost all transmission occurs through breath. ↩︎

  [2] Yes, these potential fixes are in tension with each other. If someone using the system wants to be cash-only, they will have to keep a pre-paid balance in the system. In the other direction, a direct-charge system has higher costs for PopID, but that's the company's business issue, not the customer's. ↩︎

A Look-Back and Ahead on Data Protection in Latin America and Spain

EFF - Mon, 09/21/2020 - 2:31pm

We're proud to announce a new updated version of The State of Communications Privacy Laws in eight Latin American countries and Spain. For over a year, EFF has worked with partner organizations to develop detailed questions and answers (FAQs) around communications privacy laws. Our work builds upon previous and ongoing research of such developments in Argentina, Brazil, Chile, Colombia, Mexico, Paraguay, Panama, Peru, and Spain. We aim to understand each country’s legal challenges, in order to help us spot trends, identify the best and worst standards, and provide recommendations to look ahead. This post about data protection developments in the region is one of a series of posts on the current State of Communications Privacy Laws in Latin America and Spain. 

As we look back at the past ten years in data protection, we have seen considerable legal progress in granting users’ control over their personal lives. Since 2010, sixty-two new countries have enacted data protection laws, giving a total of 142 countries with data protection laws worldwide. In Latin America, Chile was the first country to adopt such a law in 1999, followed by Argentina in 2000. Several countries have now followed suit: Uruguay (2008), Mexico (2010), Peru (2011), Colombia (2012), Brazil (2018), Barbados (2019), and Panama (2019). While there are still different privacy approaches, data protection laws are no longer a purely European phenomenon.

Yet contemporary developments in European data protection law continue to have an enormous influence in the region, in particular the EU's 2018 General Data Protection Regulation (GDPR). Since 2018, several countries, including Barbados and Panama, have led the way in adopting GDPR-inspired laws in the region, promising the beginning of a new generation of data protection legislation. In fact, the privacy protections of Brazil's new GDPR-inspired law took effect this month, on September 18, after the Senate pushed back on a delaying order from President Jair Bolsonaro.

But when it comes to data protection in the law enforcement context, few countries have followed the European Union's latest steps. The EU Police Directive, a law on the processing of personal data by police forces, has not yet become a Latin American phenomenon; Mexico is the only country with a specific data protection regulation for the public sector. By not following suit, countries in the Americas are missing a crucial opportunity to strengthen their communications privacy safeguards with rights and principles common to the global data protection toolkit.

New GDPR-Inspired Data Protection Laws
Brazil, Barbados, and Panama have been the first countries in the region to adopt GDPR-inspired data protection laws. Panama's law, approved in 2019, will enter into force in March 2021.

Brazil's law has faced an uphill battle. The provisions creating the oversight authority came into force in December 2018, but it took the government a year and a half to introduce a decree implementing its structure. The decree, however, will only have legal force when the President of the Board is officially appointed and approved by the Senate. No appointment has been made as of the publication of this post. For the rest of the law, February 2020 was the original deadline to enter into force. This was later changed to August 2020. The law was then further delayed to May 2021 through an Executive Act issued by President Bolsonaro. Yet, in a surprising positive twist, Brazil's Senate stopped President Bolsonaro's deferral in August. That means the law is now in effect, except for the penalties section, which has been deferred again, to August 2021.

Definition of Personal Data 
Like the GDPR, Brazil's and Panama's laws include a comprehensive definition of personal data: any information concerning an identified or identifiable person. The definition of personal data in Barbados's law has certain limitations. It only protects data which relates to an individual who can be identified "from that data; or from that data together with other information which is in the possession of or is likely to come into the possession of the provider." Anonymized data in Brazil, Panama, and Barbados falls outside the scope of the law. There are also variations in how these countries define anonymized data.

Panama defines it as data that cannot be re-identified by reasonable means, though the law doesn't set explicit parameters to guide this assessment. Brazil's law makes it clear that anonymized data will be considered personal data if the anonymization process can be reversed using exclusively the provider's own means, or if it can be reversed with reasonable efforts. The Brazilian law defines objective factors for determining what's reasonable, such as the cost and time necessary to reverse the anonymization process given the technologies available, and the exclusive use of the provider's own means. These parameters affect big tech companies with extensive computational power and large collections of data, which will need to determine whether their own resources could be used to re-identify anonymized data. This provision should not be interpreted in a way that ignores scenarios where the sharing or linking of anonymized data with other data sets, or with publicly available information, leads to the re-identification of the data.
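Why does linking matter so much? A minimal sketch of a classic linkage attack, using entirely invented records, shows how "anonymized" data plus an outside data set can re-identify people:

```python
# All records below are invented for illustration.
anonymized = [  # health records with names stripped
    {"zip": "70000", "birth": "1985-03-02", "gender": "F", "diagnosis": "X"},
    {"zip": "70001", "birth": "1990-11-15", "gender": "M", "diagnosis": "Y"},
]
public_roll = [  # a hypothetical public register that still carries names
    {"name": "Ana Souza", "zip": "70000", "birth": "1985-03-02", "gender": "F"},
    {"name": "Joao Lima", "zip": "70001", "birth": "1990-11-15", "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth", "gender")

def link(anon_records, roll):
    """Join on quasi-identifiers; a unique match re-identifies the record."""
    for record in anon_records:
        matches = [p for p in roll
                   if all(p[k] == record[k] for k in QUASI_IDENTIFIERS)]
        if len(matches) == 1:
            yield matches[0]["name"], record["diagnosis"]

for name, diagnosis in link(anonymized, public_roll):
    print(f"{name} -> diagnosis {diagnosis}")  # both invented people re-identified
```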

Right to Portability 
The three countries grant users the right to portability—a right to take their data from a service provider and transfer it elsewhere. Portability adds to the so-called ARCO (Access, Rectification, Cancellation, and Opposition) rights—a set of users’ rights that allow them to exercise control over their own personal data.

Enforcers of portability laws will need to make careful decisions about what happens when one person wants to port away data that relates both to them and another person, such as their social graph of contacts and contact information like phone numbers. This implicates the privacy and other rights of more than one person. Also, while portability helps users leave a platform, it doesn’t help them communicate with others who still use the previous one. Network effects can prevent upstart competitors from taking off. This is why we also need interoperability to enable users to interact with one another across the boundaries of large platforms. 

Again, different countries have different approaches. The Brazilian law tries to solve the multi-person data and interoperability issues by not limiting the "ported data" to data the user has given to a provider. It also doesn't specify the format to be adopted. Instead, the data protection authority can set standards for, among other things, interoperability, security, retention periods, and transparency. In Panama, portability is both a right and a principle: it is one of the general data protection principles that guide the interpretation and implementation of the country's overarching data protection law. As a right, it resembles the GDPR model. The user has the right to receive a copy of their personal data in a structured, commonly used, and machine-readable format. The right applies only when the user has provided their data directly to the service provider and has given their consent, or when the data is needed for the execution of a contract. Panama's law expressly states that portability is "an irrevocable right" that can be requested at any moment.

Portability rights in Barbados are similar to those in Panama. But, like the GDPR, there are some limitations. Users can only exercise the right to port their data directly from one provider to another when technically feasible. As in Panama, users can port data that they have provided themselves to the providers, but not data about themselves that other users have shared.
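None of these laws prescribes a specific file format. Purely as an illustration, a "structured, commonly used, and machine-readable" export might look like plain JSON with a declared schema version; every field name below is invented:

```python
import json

export = {
    "schema_version": "1.0",
    "user": {"id": "u-123", "display_name": "example"},
    "data_provided_by_user": {      # the scope portability rights typically cover
        "posts": [{"created": "2020-09-01T12:00:00Z", "text": "hello"}],
        "contacts": [],             # multi-person data raises the issues noted above
    },
}

# Write a machine-readable export any competing service could ingest.
with open("takeout.json", "w", encoding="utf-8") as f:
    json.dump(export, f, indent=2, ensure_ascii=False)
```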

Automated Decision-Making About Individuals
Automated decision-making systems continuously make decisions about our lives, aiding or replacing human decision-making. In response, there is an emerging GDPR-inspired right not to be subjected to solely automated decision-making processes that can produce legal or similarly significant effects on the individual. This new right would apply, for example, to automated systems that use "profiles" to predict aspects of our personality, behavior, interests, locations, movements, and habits. With it, the user can contest decisions made about them and/or obtain an explanation of the logic of the decision. Here, too, there are a few variations among countries.

Brazilian law establishes that the user has a right to request review of decisions affecting them that are based solely on the automated processing of personal data. These include decisions intended to define personal, professional, consumer, or credit profiles, or other traits of someone's personality. Unfortunately, President Bolsonaro vetoed a provision requiring human review in automated decision-making. On the upside, the user has a right to ask the provider to disclose the criteria and procedures adopted for automated decision-making, though unfortunately there is an exception for trade and industrial secrets.

In Barbados, the user has the right to know, upon request to the provider, about the existence of decisions based on automated processing of personal data, including profiling. As in other countries, this includes access to information about the logic involved and the envisaged consequences for them. Barbados users also have the right not to be subject to decisions, including profiling, that are based solely on automated processing without human involvement and that produce legal or similarly significant effects on them. There are exceptions for when automated decisions are necessary for entering into or performing a contract between the user and the provider, authorized by law, or based on user consent. Barbados defines consent similarly to the GDPR: there must be a freely given, specific, informed, and unambiguous indication of the user's wishes regarding the processing of their personal data, and the user retains the ability to change their mind.

Panama's law also grants users the right not to be subject to a decision based solely on automated processing of their personal data, without human involvement, but this right only applies when the process produces negative legal effects concerning the user or detrimentally affects the user's rights. As in Barbados, Panama allows automated decisions that are necessary for entering into or performing a contract, based on the user's consent, or permitted by law. But Panama defines "consent" in a less user-protective manner: a mere "manifestation" of a person's will.

Legal Basis for the Processing of Personal Data
It is important for data privacy laws to require service providers to have a valid lawful basis to process personal data, and to document that basis before the processing starts. If not, the processing is unlawful. Data protection regimes, including all principles and users' rights, must apply regardless of whether consent is required.

Panama's new law allows three legal bases other than consent: to comply with a contractual obligation, to comply with a legal obligation, or as authorized by a particular law. Brazil and Barbados set out ten legal bases for personal data processing, four more than the GDPR, with consent only one of them. Brazil's and Barbados's laws seek to balance this approach by providing users with clear and concise information about what providers do with their personal data. They also grant users the right to object to the processing of their data, which allows users to stop or prevent processing.

Data Protection in the Law Enforcement Context
Latin America lags on comprehensive data protection regimes that apply not just to corporations, but also to public authorities processing personal data for law enforcement purposes. The EU, on the other hand, has adopted not just the GDPR but also the EU Police Directive, a law that regulates the processing of personal data by police forces. Most Latam data protection laws exempt law enforcement and intelligence activities from the law's application. In Colombia, however, some data protection rules apply to the public sector: the nation's GDPL covers it, with exceptions for national security, defense, anti-money-laundering regulations, and intelligence. The Constitutional Court has stated that these exceptions are not absolute exclusions from the law's application, but exemptions from just some provisions. Complementary statutory law should regulate them, subject to the proportionality principle.

Spain has not implemented the EU’s Police Directive yet. As a result, personal data processing for law enforcement activities remains held to the standards of the country's previous data protection law. Argentina's and Chile's laws do apply to law enforcement agencies, and Mexico has a specific data protection regulation for the public sector. But Peru and Panama exclude law enforcement agencies from the scope of their data protection laws. Brazil's law creates an exception to personal data processing solely for public safety, national security, and criminal investigations. Still, it lays down that specific legislation has to be approved to regulate these activities. 

Recommendations and Looking Ahead
Communications privacy has much to gain from the intersection of its traditional inviolability safeguards and the data protection toolkit. That intersection helps entrench international human rights standards applicable to law enforcement access to communications data. The principles of data minimization and purpose limitation in the data protection world correlate with the necessity, adequacy, and proportionality principles under international human rights law. They are necessary to curb massive data retention and dragnet government access to data. The idea that any personal data processing requires a legitimate basis upholds the basic tenets of legality and legitimate aim when placing limitations on fundamental rights. Law enforcement access to communications data must be clearly and precisely prescribed by law. No legitimate basis other than compliance with a legal obligation is acceptable in this context.

Data protection transparency and information safeguards reinforce a user’s right to a notification when government authorities have requested their data. European courts have asserted this right stems from privacy and data protection safeguards. In the Tele2 Sverige AB and Watson cases, the EU Court of Justice (CJEU) held that "national authorities to whom access to the retained data has been granted must notify the persons affected . . . as soon as that notification is no longer liable to jeopardize the investigations being undertaken by those authorities." Before that, in Szabó and Vissy v. Hungary, the European Court of Human Rights (ECHR) had declared that notifying users of surveillance measures is also inextricably linked to the right to an effective remedy against the abuse of monitoring powers.

Data protection transparency and information safeguards can also play a key role in fostering greater insight into companies' and governments' practices when it comes to requesting and handing over users' communications data. In collaboration with EFF, many Latin American NGOs have been pushing Internet Service Providers to publish their law enforcement guidelines and aggregate information on government data requests. We've made progress over the years, but there's still plenty of room for improvement. When it comes to public oversight, data protection authorities should have the legal mandate to supervise personal data processing by public entities, including law enforcement agencies. They should be impartial and independent authorities, conversant in data protection and technology, with adequate resources to exercise the functions assigned to them.

There are already many essential safeguards in the Latam region. Most countries' constitutions explicitly recognize privacy as a fundamental right, and most countries have adopted data protection laws. Each constitution recognizes a general right to private life or intimacy, or a set of multiple, specific rights: a right to the inviolability of communications; an explicit data protection right (Chile, Mexico, Spain); or "habeas data" (Argentina, Peru, Brazil) as either a right or a legal remedy. (In general, habeas data protects the right of any person to find out what data is held about them.) And, most recently, a landmark ruling of Brazil's Supreme Court recognized data protection as a fundamental right drawn from the country's Constitution.

Across our work in the region, our FAQs help spot loopholes, flag concerning standards, and highlight pivotal safeguards (or their absence). It's clear that the rise of data protection laws has helped secure user privacy across the region, but more needs to be done. Strong data protection rules that apply to law enforcement activities would enhance communications privacy protections in the region. More transparency is urgently needed, both about how the regulations will be implemented and about what additional steps private companies and the public sector are taking to proactively protect user data.

We invite everyone to read these reports and reflect on what work we should champion and defend in the days ahead, and what still needs to be done.

Three Interactive Tools for Understanding Police Surveillance

EFF - Thu, 09/17/2020 - 6:05pm

This post was written by Summer 2020 Intern Jessica Romo, a student at the Reynolds School of Journalism at University of Nevada, Reno. 

As law enforcement and government surveillance technology continues to become more advanced, it has also become harder for everyday people to avoid. Law enforcement agencies all over the United States are using body-worn cameras, automated license plate readers, drones, and much more, all of which threaten people's right to privacy. But it's often difficult for people to even become aware of what technology is being used where they live.

The Electronic Frontier Foundation has three interactive tools that help you learn about the new technologies being deployed around the United States and how they impact you: the Atlas of Surveillance, Spot the Surveillance, and Who Has Your Face?

The Atlas of Surveillance
https://atlasofsurveillance.org 

The Atlas of Surveillance is a database and map that will help you understand the magnitude of surveillance at the national level, as well as what kind of technology is used locally where you live.   

Developed in partnership with the University of Nevada, Reno's Reynolds School of Journalism, the Atlas of Surveillance is a dataset with more than 5,500 points of information on surveillance technology used by law enforcement agencies across the United States. Journalism students and EFF volunteers gathered online research, such as news articles and government records, on 10 common surveillance technologies and two different types of surveillance command centers.

By clicking any point on the map, you will get the name of an agency and a description of the technology. If you toggle the interactive legend, you can see how each technology is spreading across the country. You can also search a simple-to-use text version of the database, including links to the news articles and documents that confirm the existence of the technology in that region.
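For readers who prefer to script their queries, the Atlas's downloadable dataset can be filtered with a few lines of Python. The column names below are assumptions about the CSV layout, so adjust them to match the actual export:

```python
import csv

def agencies_using(csv_path: str, technology: str, state: str):
    """Yield (agency, city) rows matching a technology type and state."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if (row.get("Type of Technology") == technology
                    and row.get("State") == state):
                yield row.get("Agency"), row.get("City")

# Hypothetical usage: list Nevada agencies recorded as using drones.
for agency, city in agencies_using("atlas.csv", "Drones", "NV"):
    print(f"{agency} ({city})")
```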

Who Has Your Face?
https://whohasyourface.org/

Half of all adults in the United States likely have their image in a law enforcement facial recognition database, according to a 2016 report from the Center on Privacy & Technology at Georgetown Law. Today, that number is probably higher. But what about your face? 

Face recognition is a form of biometric surveillance that uses software to automatically identify or track someone based on their physical characteristics. People are subjected to face recognition in hundreds of cities around the country. Governments have a number of uses for the technology, from screening passengers at airports to identifying protesters caught on camera.

Who Has Your Face? is a short quiz that allows a user to see which government agencies can access their official photographs (such as a driver's license or a mugshot) and whether investigators can apply face recognition technology to those photos.

The site doesn't collect personal information, but it does ask five basic questions, such as whether you have a driver's license and, if so, what state issued it. Based on your choices, the system will automatically generate a list of agencies that could potentially access your images.

It also includes a resource page listing what each state’s DMV and other agencies can access. 

Spot the Surveillance
https://eff.org/spot

If you drove past an automated license plate reader, would you know what it looks like? Ever look closely at the electronic devices carried by police officers? Most of the time, people might not even notice when they've walked into the frame of surveillance technology. 

Spot the Surveillance is a virtual reality experience where you will learn how to identify surveillance technology that local law enforcement agencies use.

The experience takes place in a San Francisco neighborhood, where a resident is having an interaction with two police officers. You'll look in every direction to find seven technologies being deployed. After you find each technology, you'll learn more about how it operates and how it's used by police. Afterwards, you can put your new skills to use identifying these technologies in real life, in your hometown.

Spot the Surveillance works with most VR headsets, but is also available to use on a regular web browser. There’s also a Spanish version.


Get To Know The Surveillance That’s Getting To Know You

EFF has fought back against surveillance for decades, but we need your help. What other interactive tools would you like to see? Let us know on social media or by emailing info@eff.org, so we can continue to help you protect your privacy. 

Plaintiffs Continue Effort to Overturn FOSTA, One of the Broadest Internet Censorship Laws

EFF - Thu, 09/17/2020 - 3:34pm

Special thanks to legal intern Ross Ufberg, who was lead author of this post.

A group of organizations and individuals are continuing their fight to overturn the Allow States and Victims to Fight Online Sex Trafficking Act, known as FOSTA, arguing that the law violates the Constitution in multiple respects.

In legal briefs filed in federal court recently, plaintiffs Woodhull Freedom Foundation, Human Rights Watch, the Internet Archive, Alex Andrews, and Eric Koszyk argued that the law violates the First and Fifth Amendments, and the Constitution’s prohibition against ex post facto laws. EFF, together with Daphne Keller at the Stanford Cyber Law Center, as well as lawyers from Davis Wright Tremaine and Walters Law Group, represent the plaintiffs.

How FOSTA Censored the Internet

FOSTA led to widespread Internet censorship, as websites and other online services either prohibited users from speaking or shut down entirely. FOSTA accomplished this comprehensive censorship by making three major changes in law:

First, FOSTA creates a new federal crime for any website owner who "promotes" or "facilitates" prostitution, without defining what those words mean. Organizations doing educational, health, and safety-related work, such as the Woodhull Foundation and one of the leaders of the Sex Workers Outreach Project USA (SWOP USA), fear that prosecutors may interpret advocacy on behalf of sex workers as the "promotion" of prostitution. Prosecutors may view the creation of an app that makes it safer for sex workers in the field the same way. Now these organizations and individuals, the plaintiffs in the lawsuit, are reluctant to exercise their First Amendment rights for fear of being prosecuted or sued.

Second, FOSTA expands potential liability for federal sex trafficking offenses by adding vague definitions and expanding the pool of enforcers. In addition to federal prosecution, website operators and nonprofits now must fear prosecution from thousands of state and local prosecutors, as well as private parties. The cost of litigation is so high that many nonprofits will simply cease exercising their free speech, rather than risk a lawsuit where costs can run into the millions, even if they win.

Third, FOSTA limits the federal immunity provided to online intermediaries that host third-party speech under 47 U.S.C. § 230 (“Section 230”). This immunity has allowed for the proliferation of online services that host user-generated content, such as Craigslist, Reddit, YouTube, and Facebook. Section 230 helps ensure that the Internet supports diverse and divergent viewpoints, voices, and robust debate, without every website owner needing to worry about being sued for their users’ speech. The removal of Section 230 protections resulted in intermediaries shutting down entire sections or discussion boards for fear of being subject to criminal prosecution or civil suits under FOSTA.

How FOSTA Impacted the Plaintiffs

In their filings asking a federal district court in Washington, D.C. to rule that FOSTA is unconstitutional, the plaintiffs describe how FOSTA has impacted them and a broad swath of other Internet users. Some of those impacts have been small and subtle, while others have been devastating.

Eric Koszyk is a licensed massage therapist who relied heavily on Craigslist's advertising platform to find new clients and schedule appointments. Since April 2018, it's been hard for Koszyk to supplement his family's income with his massage business. After Congress passed FOSTA, Craigslist shut down the Therapeutic Services section of its website, where Koszyk had been most successful at advertising his services. Craigslist further prohibited him from posting his ads anywhere else on its site, despite the fact that his massage business is entirely legal. In a post about FOSTA, Craigslist said that it shut down portions of its site because the new law created too much risk. In the two years since Craigslist removed its Therapeutic Services section, Koszyk still hasn't found a way to reach the same customer base through other outlets. His income is less than half of what it was before FOSTA.

Alex Andrews, a national leader in fighting for sex worker rights and safety, has had her activism curtailed by FOSTA. As a board member of SWOP USA, Andrews helped lead its efforts to develop a mobile app and website that would have allowed sex workers to report violence and harassment. The app would have included a database of reported clients that workers could query before engaging with a potential client, and would notify others nearby when a sex worker reported being in trouble. When Congress passed FOSTA, Alex and SWOP USA abandoned their plans to build this app. SWOP USA, a nonprofit, simply couldn’t risk facing prosecution under the new law.

FOSTA has also impacted a website that Andrews helped to create. The website Rate That Rescue is “a sex worker-led, public, free, community effort to help everyone share information” about organizations which aim to help sex workers leave their field or otherwise assist them. The website hosts ratings and reviews. But without the protections of Section 230, in Andrews’ words, the website “would not be able to function” because of the “incredible liability for the content of users’ speech.” It’s also likely that Rate That Rescue’s creators face criminal liability under FOSTA’s new criminal provisions because the website aims to make sex workers’ lives and work safer and easier. This could be considered to violate FOSTA’s provisions that make it a crime to promote or facilitate prostitution.

Woodhull Freedom Foundation advocates for sexual freedom as a human right, which includes supporting the health, safety, and protection of sex workers. Each year, Woodhull organizes a Sexual Freedom Summit in Washington, D.C., bringing together educators, therapists, legal and medical professionals, and advocacy leaders to strategize on ways to protect sexual freedom and health. There are workshops devoted to issues affecting sex workers, including harm reduction, disability, age, health, and personal safety. This year, COVID-19 made an in-person meeting impossible, so Woodhull is livestreaming some of the events. Woodhull has had to censor its ads on Facebook and modify its programming on YouTube just to get past those companies' heightened moderation policies in the wake of FOSTA.

The Internet Archive, a nonprofit library that seeks to preserve digital materials, faces increased risk because FOSTA has dramatically increased the possibility that a prosecutor or private citizen might sue it simply for archiving newly illegal web pages. Such a lawsuit would be a real threat for the Archive, which is the Internet’s largest digital library.

FOSTA puts Human Rights Watch in danger, as well. Because the organization advocates for the decriminalization of sex work, they could easily face prosecution for “promoting” prostitution.

Where the Legal Fight Against FOSTA Stands Now

With the case now back in district court after the D.C. Circuit Court of Appeals reversed the lower court’s decision to dismiss the suit, both sides have filed motions for summary judgment. In their filings, the plaintiffs make several arguments for why FOSTA is unconstitutional.

First, they argue that FOSTA is vague and overbroad. The Supreme Court has said that if a law “fails to give ordinary people fair notice of the conduct it prohibits,” it is unconstitutional. That is especially true when the vagueness of the law raises special First Amendment concerns.

FOSTA does just that. The law makes it illegal to “facilitate” or “promote” prostitution without defining what that means. This has led to, and will continue to lead to, the censorship of speech that is protected by the First Amendment. Organizations like Woodhull, and individuals like Andrews, are already curbing their own speech. They fear their advocacy on behalf of sex workers may constitute “promotion” or “facilitation” of prostitution.

The government argues that the likelihood of anyone misconstruing these words is remote. But some courts interpret “facilitate” to simply mean make something easier. By this logic, anything that plaintiffs like Andrews or Woodhull do to make sex work safer, or make sex workers’ lives easier, could be considered illegal under FOSTA.

Second, the plaintiffs argue that FOSTA’s Section 230 carveouts violate the First Amendment. A provision of FOSTA eliminates some Section 230 immunity for intermediaries on the Web, which means anybody who hosts a blog where third parties can comment, or any company like Craigslist or Reddit, can be held liable for what other people say.

As the plaintiffs show, all the removal of Section 230 immunity really does is squelch free speech. Without the assurance that a host won’t be sued for what a commentator or poster says, those hosts simply won’t allow others to express their opinions. As discussed above, this is precisely what happened once FOSTA passed.

Third, the plaintiffs argue that FOSTA is not narrowly tailored to the government's interest in stopping sex trafficking. Government lawyers say that Congress passed FOSTA because it was concerned about sex trafficking. The intent was to roll back Section 230 in order to make it easier for victims of trafficking to sue certain websites, such as Backpage.com. The plaintiffs agree with Congress that there is a strong public interest in stopping sex trafficking. But FOSTA doesn't accomplish those goals; instead, it sweeps up a host of speech and advocacy protected by the First Amendment.

There’s no evidence the law has reduced sex trafficking. The effect of FOSTA is that traffickers who once posted to legitimate online platforms will go even deeper underground—and law enforcement will have to look harder to find them and combat their illegal activity.

Finally, FOSTA violates the Constitution’s prohibition on criminalizing past conduct that was not previously illegal. It’s what is known as an “ex post facto” law. FOSTA creates new retroactive liability for conduct that occurred before Congress passed the law. During the debate over the bill, the U.S. Department of Justice even admitted this problem to Congress—but the DOJ later promised to “pursu[e] only newly prosecutable criminal conduct that takes place after the bill is enacted.” The government, in essence, is saying to the courts, “We promise to do what we say the law means, not what the law clearly says.” But the Department of Justice cannot control the actions of thousands of local and state prosecutors—much less private citizens who sue under FOSTA based on conduct that occurred long before it became law.

* * *

FOSTA sets out to tackle the genuine problem of sex trafficking. Unfortunately, the way the law is written achieves the opposite effect: it makes it harder for law enforcement to actually locate victims, and it punishes organizations and individuals doing important work. In the process, it does irreparable harm to the freedom of speech guaranteed by the First Amendment. FOSTA silences diverse viewpoints, makes the Internet less open, and makes critics and advocates more circumspect. The Internet should remain a place where robust debate occurs, without the fear of lawsuits or jail time.

Related Cases: Woodhull Freedom Foundation et al. v. United States

EFF Joins Coalition Urging Senators to Reject the EARN IT Act

EFF - Wed, 09/16/2020 - 6:59pm

Recently, EFF joined the Center for Democracy and Technology (CDT) and 26 other organizations in sending a letter to the Senate opposing the EARN IT Act (S. 3398), urging Senators not to fast-track the bill and to vote NO on its passage.

As we have written many times before, if passed, the EARN IT Act would threaten free expression, harm innovation, and jeopardize important security protocols. We were pleased to join with other organizations that share our concerns about this harmful bill.

We sent our letter, but your Senators need to hear from you. Contact your Senators and tell them to oppose the EARN IT Act.

 Take Action

What the *, Nintendo? This in-game censorship is * terrible.

EFF - Wed, 09/16/2020 - 6:10pm

While many are staying at home and escaping into virtual worlds, it's natural to discuss what's going on in the physical world. But Nintendo is shutting down those conversations with its latest Switch system update (Sep. 14, 2020) by adding new terms like COVID, coronavirus and ACAB to its censorship list for usernames, in-game messages, and search terms for in-game custom designs (but not the designs themselves).

While we understand the urge to prevent abuse and misinformation about COVID-19, censoring certain strings of characters is a blunderbuss approach unlikely to substantially improve the conversation. As an initial matter, it is easily circumvented: our testing, shown above, confirmed that while Nintendo censors coronavirus, COVID, and ACAB, it does not restrict substitutes like c0vid or a.c.a.b., nor corona and virus when written individually.
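A toy reimplementation makes the problem concrete. Nintendo's actual filter is not public, so the matching logic below is an assumption, but any naive substring blocklist shares these failure modes:

```python
BLOCKLIST = {"covid", "coronavirus", "acab"}

def is_blocked(text: str) -> bool:
    """Naive substring blocklist check, case-insensitive."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(is_blocked("stay safe during covid"))  # True:  blocked as intended
print(is_blocked("c0vid update"))            # False: trivial substitution evades it
print(is_blocked("a.c.a.b."))                # False: punctuation evades it
print(is_blocked("corona virus"))            # False: split words pass
print(is_blocked("the macabre truth"))       # True:  "macabre" contains "acab"
```

The same check that misses c0vid will flag innocent words that happen to contain a blocked string, the classic Scunthorpe problem, so the filter both under- and over-blocks at once.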

More importantly, it’s a bad idea, because these terms can be part of important conversations about politics or public health. Video games are not just for gaming and escapism, but are part of the fabric of our lives as a platform for political speech and expression.  As the world went into pandemic lockdown, Hong Kong democracy activists took to Nintendo’s hit Animal Crossing to keep their pro-democracy protest going online (and Animal Crossing was banned in China shortly after). Just as many Black Lives Matter protests took to the streets, other protesters voiced their support in-game.  Earlier this month, the Biden campaign introduced Animal Crossing yard signs which other players can download and place in front of their in-game home. EFF is part of this too—you can show your support for EFF with in-game hoodies and hats. 

Nevertheless, Nintendo seems uncomfortable with political speech on its platform. The Japanese Terms of Use prohibit in-game “political advocacy” (政治的な主張 or seijitekina shuchou), which led to a candidate for Japan’s Prime Minister canceling an in-game campaign event. But it has not expanded this blanket ban to the Terms for Nintendo of America or Nintendo of Europe.

Nintendo has the right to host the platform as it sees fit. But just because it can do this doesn't mean it should. Nintendo should also recognize that it has provided a platform for political and social expression, and allow people to use words that are part of important conversations about our world, whether about the pandemic, protests against police violence, or democracy in Hong Kong.

Trump’s Ban on TikTok Violates First Amendment by Eliminating Unique Platform for Political Speech, Activism of Millions of Users, EFF Tells Court

EFF - Mon, 09/14/2020 - 7:57pm

We filed a friend-of-the-court brief—primarily written by the First Amendment Clinic at the Sandra Day O’Connor College of Law—in support of a TikTok employee who is challenging President Donald Trump’s ban on TikTok and was seeking a temporary restraining order (TRO). The employee contends that Trump's executive order infringes the Fifth Amendment rights of TikTok's U.S.-based employees. Our brief, which is joined by two prominent TikTok users, urges the court to consider the First Amendment rights of millions of TikTok users when it evaluates the plaintiff’s claims.

Notwithstanding its simple premise, TikTok has grown to have an important influence in American political discourse and organizing. Unlike other platforms, users on TikTok do not need to “follow” other users to see what they post. TikTok thus uniquely allows its users to reach wide and diverse audiences. That’s why the two TikTok users who joined our brief use the platform. Lillith Ashworth, whose critiques of Democratic presidential candidates went viral last year, uses TikTok to talk about U.S. politics and geopolitics. The other user, Jynx, maintains an 18+ adult-only account, where they post content that centers on radical leftist liberation, feminism, and decolonial politics, as well as the labor rights of exotic dancers.

Our brief argues that in evaluating the plaintiff's claims, the court must consider the ban's First Amendment implications. The Supreme Court has established that the rights set forth in the Bill of Rights work together; as a result, the plaintiff's Fifth Amendment claims are strengthened by First Amendment considerations. We say in our brief:

A ban on TikTok violates fundamental First Amendment principles by eliminating a specific type of speaking, the unique expression of a TikTok user communicating with others through that platform, without sufficient considerations for the users’ speech. Even though the order facially targets the platform, its censorial effects are felt most directly by the users, and thus their First Amendment rights must be considered in analyzing its legality.

EFF, the First Amendment Clinic, and the individual amici urge the court to adopt a higher standard of scrutiny when reviewing the plaintiff’s claims against the president. Not only are the plaintiff’s Fifth Amendment liberties at stake, but millions of TikTok users have First Amendment freedoms at stake. The Fifth Amendment and the First Amendment are each critical in securing life, liberty, and due process of law. When these amendments are examined separately, they each deserve careful analysis; but when the interests protected by these amendments come together, a court should apply an even higher standard of scrutiny.

The hearing on the TRO scheduled for tomorrow was canceled after the government promised the court that it did not intend to include the payment of wages and salaries within the executive order's definition of prohibited transactions, thus addressing the plaintiff's most urgent claims.

Things to Know Before Your Neighborhood Installs an Automated License Plate Reader

EFF - Mon, 09/14/2020 - 3:38pm

Every week EFF receives emails from members of homeowners’ associations wondering whether their Homeowners’ Association (HOA) or Neighborhood Association is making a smart choice by installing automated license plate readers (ALPRs). Local groups often turn to license plate readers thinking that the cameras will protect their community from crime. But the truth is, these cameras—which record every license plate coming in and out of the neighborhood—may create more problems than they solve.

The False Promise of ALPRs

Some members of a community believe that, whether or not they’ve experienced crime in their neighborhood, increased surveillance is needed to keep it safe. This belief reflects a larger nationwide trend: people’s fear of crime is high and rising, even though crime rates in the United States are low by historical standards.

People imagine that if a crime is committed, an association member can hand police the license plate numbers of everyone who drove past a camera around the time the crime is believed to have been committed. But this approach turns innocent people into suspects simply because they happened to drive through a specific neighborhood. For some communities, that could mean hundreds of cars falling under suspicion, as the sketch below illustrates.
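To make the problem concrete, here is a minimal sketch of the kind of time-window query an ALPR log makes possible. The data model, plate numbers, and function names are our own illustrative assumptions, not any vendor’s actual API:

```python
# Hypothetical sketch of the kind of query an ALPR log makes possible.
# Data model, plates, and timestamps are invented; this is not any vendor's API.
from datetime import datetime, timedelta

# Each read is (plate, timestamp) captured at the neighborhood entrance.
reads = [
    ("7ABC123", datetime(2020, 9, 10, 14, 2)),
    ("4XYZ789", datetime(2020, 9, 10, 14, 11)),
    ("2DEF456", datetime(2020, 9, 10, 14, 25)),
    # ...hundreds more on a busy street
]

def plates_near(crime_time, window_minutes=30):
    """Return every plate seen within +/- window_minutes of crime_time."""
    window = timedelta(minutes=window_minutes)
    return [plate for plate, seen_at in reads
            if abs(seen_at - crime_time) <= window]

# One call sweeps in every driver who happened to pass by around that time,
# turning each of them into a potential suspect.
print(plates_near(datetime(2020, 9, 10, 14, 15)))
# ['7ABC123', '4XYZ789', '2DEF456']
```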

Also, despite what ALPR vendors like Flock Safety and Vigilant Solutions claim, there is no real evidence that ALPRs reduce crime. ALPR vendors, like other surveillance salespeople, assume that surveillance will reduce crime, either by deterring would-be criminals who know they are being watched or by helping secure convictions of people who have allegedly committed crimes in the neighborhood. Little empirical evidence supports either claim.

ALPRs do, however, present a host of other potential problems for people who live, work, or commute in a surveilled area. 

The Danger ALPRs Present To Your Neighborhood

ALPRs are billed as neighborhood watch tools that let a community record which cars enter and leave, and when. They essentially turn any neighborhood into a gated community by casting suspicion on everyone who comes and goes. Some of these ALPR systems (including Flock’s) can be programmed to give every neighbor access to the records of vehicle comings and goings. But driving through a neighborhood should not create suspicion. There are thousands of reasons a person might pass through a community, yet ALPRs let anyone in the neighborhood decide who belongs and who doesn’t. Whatever motivates that individual, whether racial bias, frustration with another neighbor, or even a disagreement among family members, could be combined with ALPR records to implicate someone in a crime, or in any variety of other legal-but-uncomfortable situations.

The fact that your car passes a certain stop sign at a particular time of day may not seem like invasive information. But you can learn a lot about a person from their daily routines, and from when they deviate from those routines. If a person’s car stops leaving in the morning, a nosy neighbor at the neighborhood association could infer that they may have lost their job. If a married couple’s cars are never at the house at the same time, neighbors could infer relationship discord. These cameras also give law enforcement the ability to learn the comings and goings of every car, effectively making it impossible for drivers to protect their privacy.
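How little analysis this requires is easy to show. Below is a hypothetical sketch, with invented timestamps, of flagging a deviation from one plate’s usual morning departure; a real system would have months of far richer data to work with:

```python
# Hypothetical sketch: inferring a routine from timestamped plate reads.
# All times below are invented for illustration.

# Morning departure times (hour, minute) logged for one plate, Mon-Fri.
departures = [(8, 2), (8, 5), (7, 58), (8, 4), (10, 47)]

# Convert to minutes past midnight and take the median as the "routine."
minutes = sorted(h * 60 + m for h, m in departures)
typical = minutes[len(minutes) // 2]

for h, m in departures:
    if abs(h * 60 + m - typical) > 60:  # more than an hour off the routine
        print(f"Deviation: left at {h:02d}:{m:02d}, "
              f"usually ~{typical // 60:02d}:{typical % 60:02d}")
```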

These dangers are only made worse by the broad dissemination of this sensitive information. It goes not just to neighbors, but also to Flock employees, and even your local police. It might also go to hundreds of other police departments around the country through Flock’s new and aptly-named TALON program, which links ALPRs around the country. 

ALPR Devices Lack Oversight

HOAs and Neighborhood Associations are rarely equipped or trained to make responsible decisions about invasive surveillance technology. After all, these people are not bound by the oversight that sometimes accompanies government use of technology; they’re your neighbors. While police are subject to legally binding privacy rules (like the Fourth Amendment), HOA members are not. Neighbors could, for instance, use ALPRs to see when a neighbor comes home from work every day. They could see whether a house has a regular visitor and what time that person arrives and leaves. In San Antonio, one HOA board member was asked what would prevent someone with access to the technology from obsessively following the movements of specific neighbors. She had never considered the possibility: “Asked whether board members had established rules to keep track of who searches for what and how often, Cronenberger said it hadn’t dawned on her that someone might use the system to track her neighbors’ movements.”

Machine Error Endangers Black Lives

Like all machines, ALPRs make mistakes. And these mistakes can endanger people’s lives and physical safety. For example, an ALPR might erroneously conclude that a passing car’s license plate matches the plate of a car on a hotlist of stolen cars. This can lead police to stop the car and detain the motorist. As we know, these encounters can turn violent or even deadly, especially when the misidentified cars are driven by Black motorists.
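One common failure mode is a simple character misread. The sketch below is a hypothetical illustration, with an invented hotlist and confusion table rather than any vendor’s actual matching logic, of how one swapped character on a noisy read can produce a false “stolen car” alert:

```python
# Hypothetical sketch of how a single OCR misread creates a false hotlist hit.
# The plates, hotlist, and confusion table are invented for illustration.

# Characters that plate-reading OCR commonly confuses.
OCR_CONFUSIONS = {"B": "8", "O": "0", "I": "1", "S": "5"}

def misread(plate):
    """Simulate a worst-case OCR pass that swaps every confusable character."""
    return "".join(OCR_CONFUSIONS.get(ch, ch) for ch in plate)

hotlist = {"8XY1234"}      # a plate reported stolen

actual_plate = "BXYI234"   # an innocent driver's real plate
reported = misread(actual_plate)

# An exact string match on the noisy OCR output triggers the alert.
if reported in hotlist:
    print(f"ALERT: {reported} flagged as stolen (camera actually saw {actual_plate})")
```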

This isn’t a hypothetical scenario. Just last month, a false alert from an ALPR led police to stop a Black family, point guns at them, and force them to lie on their bellies in a parking lot—including their children, aged six and eight. Tragically, this is not the first time that police have aimed a gun at a Black motorist because of a false ALPR hit.

Automated License Plate Reader Abuses by Police Foreshadow Abuses by Neighborhoods 

Though police have used these tools for decades, communities have only recently been able to install their own ALPR systems. Over those decades, EFF and many others have criticized both ALPR vendors and law enforcement for egregious abuses of the data collected.

A February 2020 California State Auditor’s report on four jurisdictions’ use of this tech raised several significant concerns. Most of the data collected relates to people who are not suspected of any crime. Many agencies did not implement privacy-protective oversight measures, despite laws requiring them. Several agencies had no documented usage or retention policies. Many lacked guarantees that the stored data is appropriately secured. Several did not adequately confirm that the entities they shared data with had a right to receive it. And many had no appropriate safeguards governing users’ access to the data.

California agencies aren’t unique: a state audit in Vermont found that 11% of ALPR searches violated state restrictions on when cops can and can't look at the data. Simply put: police abuse this technology regularly. And unfortunately, neighborhood users will likely do the same. 

And the ease with which this data can be shared is only increasing. Vigilant Solutions, a popular vendor of police ALPR tech, shares data among thousands of departments via its LEARN database. Flock, a vendor that aims to offer this technology to neighborhoods, has just announced a new nationwide partnership that allows communities to share footage and data with law enforcement anywhere in the country, vastly expanding its reach. While Flock does include several safeguards that Vigilant Solutions does not, such as encrypted video and a 30-day deletion policy, many potential abuses remain.

Additionally, some ALPR systems can automatically flag cars that don’t look a certain way—from rusted vehicles to cars with dents or poor paint jobs—endangering anyone who might not feel the need (or have the income) to keep their car in perfect shape. These “vehicle fingerprints” might flag not just a particular license plate but “a blue Honda CRV with damage on the passenger side door and a GA license plate from Fulton County.” Rather than monitoring specific vehicles by their license plates, “vehicle fingerprint” features could create a troubling dragnet style of monitoring. Driving a car that was damaged in an accident, or that a long winter has left rusty, does not make a person worthy of suspicion or of undue police or community harassment.

Some ALPRs are even designed to search for certain bumper stickers, which could reveal the political or social views of the driver. Not all of these features appear in every ALPR system, and some are only planned, but taken together they increase the potential for abuse far beyond the dangers of collecting license plate numbers alone.

What You Can Tell Your Neighbors if You’re Concerned 

Unfortunately, ALPR devices are not the first piece of technology to exploit irrational fear of crime in order to expand police surveillance and spy on neighbors and passersby. Amazon’s surveillance doorbell Ring currently has over 1,300 partnerships with individual police departments, which allow departments to directly request footage from an individual’s personal surveillance camera without presenting a warrant. ALPRs are at least as dangerous: they track our comings and goings; the data can indicate common travel patterns (or unique ones); and because license plates are required by law, there is no obvious way to protect yourself.

If your neighborhood is considering this technology, you have options. Remind your neighbors that it collects data on everyone, regardless of suspicion. They may think that only people with something to hide need to worry—but hide what? And from whom? You may not want your neighbor knowing what time you leave your neighborhood in the morning and get back at night. You may also not want the police to know who visits your home and for how long. While the intention is to protect the neighborhood from crime, introducing this kind of surveillance may also end up incriminating your neighbors and friends for reasons you know nothing about.

You can also point out that ALPRs have not been shown to reduce crime. Likewise, consider sending around the California State Auditor’s report on abuses by law enforcement. And if the technology is installed, you can (and should) limit the amount of data that’s shared with police, automatically or manually. Remind people of the type of information ALPRs collect and what your neighbors can infer about your private life. 

If you drive a car, you’re likely being tracked by ALPRs, at least sometimes. But that doesn’t mean your neighborhood should contribute to the surveillance state. Everyone ought to have a right to pass through a community without being tracked, and without accidentally revealing personal details about how they spend their day. Automatic license plate readers installed in neighborhoods are a step in the wrong direction. 
