Electronic Frontier Foundation
In Foreshadowing Cryptocurrency Regulations, U.S. Treasury Secretary Prioritizes Law Enforcement Concerns
U.S. Treasury Secretary Steven Mnuchin foreshadowed the Trump administration’s plans for greater surveillance of cryptocurrency users during his testimony before the Senate Finance Committee on Wednesday. He noted that cryptocurrency was a “crucial area” for the Treasury Department to examine, and said:
We are working with FinCEN and we will be rolling out new regulations to be very clear on greater transparency so that law enforcement can see where the money is going and that this isn’t used for money laundering.
While we haven’t seen any draft proposals at this point, Mnuchin’s call for greater transparency—a euphemism for intrusive surveillance—definitely got our attention. One of the real risks of cryptocurrency is that it could become a technology of financial surveillance, especially in the case of open ledger protocols such as Bitcoin. These are cryptocurrencies that create unerasable, public lists of transactions that, should a pseudonymous wallet ever be associated with an individual person, can potentially link together a huge number of financial transactions. And those transactions can be deeply revealing, pointing to everything from your friend network to your sexual interests to your political affiliations. Indeed, researchers have already proven that this is not a theoretical risk.
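The linking problem described above can be illustrated with a minimal sketch. This is not a real Bitcoin parser; the addresses and amounts are invented for illustration. It shows only the core privacy risk: on a public, append-only ledger, tying a single pseudonymous address to a real person exposes every transaction that address ever touched.

```python
# Hypothetical public ledger: (sender_address, recipient_address, amount).
# On an open-ledger cryptocurrency, entries like these are visible to anyone
# and can never be erased.
LEDGER = [
    ("addr_A", "addr_B", 0.5),
    ("addr_C", "addr_A", 1.2),
    ("addr_B", "addr_D", 0.1),
    ("addr_E", "addr_F", 3.0),
]

def transactions_touching(address, ledger):
    """Return every ledger entry the given address sent or received."""
    return [tx for tx in ledger if address in (tx[0], tx[1])]

# Once "addr_A" is linked to a real identity, both of its transactions --
# who it paid, and who paid it -- are exposed at once.
print(transactions_touching("addr_A", LEDGER))
```

Real deanonymization research goes further, clustering addresses by spending patterns, but even this toy traversal shows why a permanent public record differs fundamentally from cash.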
Law enforcement has long been interested in having access to financial records during investigations. But law enforcement access isn’t the only value, let alone the most important value, regulators need to solve for. There are already laws that regulate financial institutions that hold and exchange funds (including cryptocurrencies) for consumers, and offer pathways for law enforcement seeking to learn about transactions. Before adopting new regulations in this space, it’s vital that the U.S. Treasury and others first ask:
- Do existing laws and regulations already provide access for law enforcement? If not, where specifically are there shortcomings, and how significant of an issue is that?
- How will any future proposals ensure that privacy and freedom of association are preserved?
- How might proposed regulations impact privacy-enhancing blockchain projects that are under development?
EFF has been watching the evolution of privacy-enhancing technologies such as privacy coins and decentralized exchanges. While not widely used today, these technologies hold the potential to bring some of the privacy-preserving attributes all of us already enjoy with cash into the digital world. It’s vital that future regulation doesn’t threaten these innovations before they’ve had a chance to find a foothold.
We appreciate that some federal officials are highlighting a need for privacy when commenting on cryptocurrency policy. Earlier this week, Federal Reserve Chairman Jerome Powell spoke to the House of Representatives Committee on Financial Services. When asked whether the Fed had visibility into China’s efforts to develop a state-run digital currency, Powell noted that China is “in a completely different institutional context. For example, the idea of having a ledger where you know everybody’s payments, that’s not something that would be particularly attractive in the United States context. It’s not a problem in China.”
While we don’t share Powell’s impression that financial surveillance in China is “not a problem,” we appreciate that he understands that Americans won’t stand for a system of complete financial surveillance.
As regulators and lawmakers consider the challenges we face around cryptocurrencies, we urge them to work closely with the human rights and civil liberties communities. We are concerned about a future in which every financial transaction is captured and saved forever, creating a honey pot for malicious hackers and law enforcement as well as a chilling government shadow over every purchase or donation. For thousands of years, societies around the world have thrived with privacy-preserving cash. Rather than be the generation that kills cash, we encourage the Treasury Department and other federal officials to think about ways we can bring the best attributes of cash into our digital future.
California Auditor Releases Damning Report About Law Enforcement’s Use of Automated License Plate Readers
California police and sheriffs are failing to protect the privacy of drivers on city streets, the California State Auditor’s office determined after a seven-month investigation into the use of automated license plate readers (ALPRs) by the Los Angeles Police Department and three other local law enforcement agencies. California State Senator Scott Wiener requested the State Auditor’s report.
The auditor raised a long list of concerns, including fundamental problems with police ALPR policies, failure to conduct audits, and the risk of ALPR data being abused to surveil political rallies or target immigrant populations. In addition to Los Angeles, the auditor investigated the Fresno Police Department, Sacramento County Sheriff’s Office, and Marin County Sheriff’s Office. The auditor indicated that the problems are likely prevalent across 230 California law enforcement agencies using ALPRs.
The report is a damning assessment of how California law enforcement agencies use this mass-surveillance technology, which employs computer-controlled, high-speed cameras mounted on street lights, on top of police cars, or on speed-monitoring trailers to automatically capture images of every vehicle that drives by, without drivers’ knowledge or permission. The cameras capture the exact time and place a license plate was seen and often compare that data point to hot lists of “people of interest” to police. The cameras are capable of capturing millions of data points, which, taken in the aggregate, can paint an intimate portrait of a driver’s life and even chill First Amendment protected activity.
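The hot-list comparison described above is, at its core, a simple set-membership check. The sketch below uses invented plates and locations and is not any vendor’s actual system; it illustrates the key point that every passing car is scanned and stored, while only hot-list hits trigger alerts.

```python
# Hypothetical hot list of plates of interest to police.
HOT_LIST = {"7ABC123", "4XYZ789"}

# Each scan: (plate, timestamp, location). ALPRs capture one of these for
# EVERY vehicle that passes, regardless of any suspicion of wrongdoing.
scans = [
    ("5DEF456", "2020-02-12T09:01", "Main St & 1st Ave"),
    ("7ABC123", "2020-02-12T09:02", "Main St & 1st Ave"),
]

def check_scans(scans, hot_list):
    """Return only the scans whose plate appears on the hot list."""
    return [scan for scan in scans if scan[0] in hot_list]

alerts = check_scans(scans, HOT_LIST)
print(alerts)  # one alert -- but note that BOTH scans above were retained
```

The privacy problem the auditor identified is precisely the gap between the two lists: the alert is the stated purpose, while the full scan log, kept for months or years, is what enables tracking of everyone else.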
Among the auditor’s key findings:
- The four agencies’ “[p]oorly developed and incomplete policies contributed to the agencies’ failure to implement ALPR programs that reflect the privacy principles” required by state law.
- “Instead of ensuring that only authorized users access their ALPR data for appropriate purposes, the agencies we reviewed have made abuse possible by neglecting to institute sufficient monitoring.”
- “A member of law enforcement could misuse ALPR images to stalk an individual or observe vehicles at particular locations and events, such as doctors’ offices or clinics and political rallies. Despite these risks, the agencies we reviewed conduct little to no auditing of users’ searches.”
- “[A]gencies have not based their decisions regarding how long to retain their ALPR images on the documented usefulness of those images to investigators, and they may be retaining the images longer than necessary, increasing the risk to individuals’ privacy.”
- “We found that the three agencies storing ALPR data in Vigilant’s cloud—Fresno, Marin, and Sacramento—do not have sufficient data security safeguards in their contracts.”
- While the California Department of Justice has guidelines on protecting police data from federal immigration enforcement activities, “agencies were either unaware of these guidelines or had not implemented them for their ALPR systems.”
ALPRs allow police to upload our movements to massive databases. That enables them and others to analyze our travel patterns, identify visitors to specific locations, and receive real-time alerts on specific cars through “hot lists.” For more than eight years, EFF has raised concerns about the technology because it can reveal sensitive information about all drivers, regardless of whether they are suspected of connection to criminal activity. In 2015, EFF supported legislation—S.B. 34—to require agencies to implement policies that protect civil liberties and privacy, and to maintain a detailed log of every time someone accesses ALPR data.
Last summer, EFF worked with Sen. Wiener to request the California State Auditor to conduct an audit of ALPRs used by law enforcement agencies throughout the state and to assess whether agencies are complying with S.B. 34. Based on our own research, we demonstrated to legislators how police, sheriffs, and other agencies were routinely flouting a state law requiring transparency and accountability designed to protect the public’s privacy and civil liberties. This included failure to keep track of system searches and publish their usage policies online. We also drew attention to the fact that agencies are collecting far more information than they need and are sharing the data indiscriminately with hundreds of other agencies across the country.
Over the last seven months, the state auditor’s staff spent 2,800 hours surveying every police and sheriff’s department in the state and conducting deep-dives of four agencies.
The auditor made many recommendations. The California legislature should amend state law to require the California Justice Department to draft and publish a template for ALPR policies that local law enforcement agencies can use as a model. The Justice Department should also issue guidelines about security requirements agencies should follow to protect ALPR data. The amended law must establish rules about how long ALPR data can be retained, including data that is used to link persons of interest with license plate images. Periodic evaluations of data retention policies should be conducted to ensure that police are keeping data for as short a time as practical, the auditor said. The four law enforcement agencies investigated should review and revise their ALPR policies and do an assessment of their data security practices by August.
The auditor found that 230 agencies in California were deploying ALPRs, with the majority using Vigilant Solutions to collect, store, and analyze the data. An additional 36 agencies planned to deploy ALPRs in the near future. This is the first time that a comprehensive list of ALPR users in California has been compiled.
Of the agencies that use ALPR systems, 96 percent said that they had ALPR policies. But, as the audit reveals, many of the policies that agencies have in place are insufficient. The policies of the agencies audited in Fresno, Marin, and Sacramento did not cover who has access to the system, or how the agency monitors use of the system to comply with privacy laws. The Los Angeles Police Department, which has an ALPR database of 320 million images, had no policy specifically dealing with its ALPR systems at all. The department said that its existing information technology policies adequately covered the system—a belief that the auditor’s office did not share.
The audit also reveals that the four departments it examined closely are sharing data from ALPR systems broadly—between those agencies, ALPR data are shared with entities in 44 out of 50 states. The Sacramento County Sheriff’s Office alone shares with more than 1,000 entities.
Agencies also retain data for varying periods of time, some very long. The Los Angeles Police Department, for example, keeps information from ALPR systems for five years. Information contained in these systems, the audit found, includes not only license plate information but also personal information such as name, address, date of birth, and even some criminal justice data such as arrest records.
All four agencies, the report found, lacked sufficient safeguards for managing which employees can see the data from their systems, such as a policy that requires supervisory approval for establishing an account to access an ALPR system.
Problems likely extend far beyond the four agencies profiled. “We are concerned that the policy deficiencies we found are not limited to the agencies we reviewed, and thus law enforcement agencies of all types may benefit from guidance to improve their policies,” the report said.
Based on the outcome of the audit, EFF asks the state legislature to enact new safeguards to limit how police use this technology.
- Agencies must create and define a clear process for adding and removing vehicles from hot lists, and limit hot lists to only the most serious of crimes.
- Agencies must not retain any data about vehicles that do not match a vehicle included on such a hot list.
- Agencies should not use ALPR data collected or stored by a private company.
- Agencies should follow the state of Maryland’s lead and disclose on an annual basis how many fixed and mobile license plate readers they have, how many license plates they scanned, how many of those scans matched a hot list, how many times agencies outside the state requested and received data, the results of all audits, and any breaches of the system that occurred.
- Lawmakers should forbid the use of predictive algorithms based on ALPR data.
- Lawmakers should place a statewide moratorium on government use of ALPRs, similar to the moratorium on mobile face recognition passed last session.
ALPR technology can be used to target drivers who visit sensitive places such as health centers, immigration clinics, gun shops, union halls, protests, or centers of religious worship—all without drivers’ permission or knowledge. This report clearly demonstrates the ways that these agencies are failing California drivers—and raises serious questions about how entities across the country use and access ALPR data.
The Digital Millennium Copyright Act (DMCA) is one of the most important laws affecting the Internet and technology. Without the DMCA’s safe harbors from crippling copyright liability, many of the services on which we rely, big and small, commercial and noncommercial, would not exist. That means YouTube, but also Wikipedia, Etsy, and your neighborhood blog. At the same time, the DMCA has encouraged private censorship and hampered privacy, security, and competition.
The DMCA is 22 years old this year and the Senate Subcommittee on Intellectual Property is marking that occasion with a series of hearings reviewing the law and inviting ideas for “reform.” It launched this week with a hearing on “The Digital Millennium Copyright Act at 22: What is it, why was it enacted, and where are we now,” which laid out the broad strokes of the DMCA’s history and current status. In EFF’s letter to the Committee, we explained that Section 1201 of the DMCA has no redeeming value. It has caused a lot of damage to speech, competition, innovation, and fair use. However, the safe harbors of Section 512 of the DMCA have allowed the Internet to be an open and free platform for lawful speech.
This hearing had two panels. The first featured four panelists who were involved in the creation of the DMCA 22 years ago. The second panel featured four law professors talking about the current state of the law. A theme emerged early in the first panel and continued in the second: the conversation about the DMCA should not focus on whether it is or is not working for companies, be they Internet platforms, major labels and studios, or even, say, car manufacturers. Users—be they artists, musicians, satirists, parents who want to share videos of their kids, nonprofits trying to make change, repair shops or researchers—need a place and a voice.
The intent of the DMCA 22 years ago was to discourage copyright infringement but create space for innovation and expression, for individuals as well as Hollywood and service providers. Over the course of the last two decades, however, many have forgotten who is supposed to occupy that space. As we revisit this law over the course of many hearings this year, we need to remember that this is not “Big Content v Big Tech” and ensure that users take center stage. Thankfully, at least at this hearing, there were people reminding Congress of this fact.
Section 512: Enabling Online Creativity and Expression
The DMCA has two main sections. The first is Section 512, which lays out the "safe harbor" provisions that protect service providers who meet certain conditions from monetary damages for the infringing activities of their users and other third parties on the net. Those conditions include a notice and takedown process that gives copyright holders an easy way to get content taken offline and, in theory, gives users redress if their content is wrongfully targeted. Without the safe harbor, the risk of potential copyright liability would prevent many services from doing things like hosting and transmitting user-generated content. Thus the safe harbors, while imperfect, have been essential to the growth of the Internet as an engine for innovation and free expression.
In the second part of the hearing, Professor Rebecca Tushnet, a Harvard law professor and former board member of the Organization for Transformative Works (OTW), stressed how much the safe harbor has done for creativity online. Tushnet pointed out that OTW runs the Archive of Our Own, home to over four million works from over one million users and which receives over one billion page views a month. And yet, the number of DMCA notices averages to less than one per month. Most notices they receive are invalid, but the volunteer lawyers for OTW must still spend time and expense to investigate and respond. This process—small numbers of notices and individual responses—is how most services experience the DMCA. A few players—the biggest players—however, rely on automated systems to parse large numbers of complaints. “It’s important,” said Tushnet, “not to treat YouTube like it was the Internet. If we do that, the only service to survive will be YouTube.”
We agree. Almost everything you use online relies in some way on the safe harbor provided by section 512 of the DMCA. Restructuring the DMCA around the experiences of the largest players like YouTube and Facebook will hurt users, many of whom would like more options rather than fewer.
“The system is by no means perfect, there remain persistent problems with invalid takedown notices used to extort real creators or suppress political speech, but like democracy, it’s better than most of the alternatives that have been tried,” said Tushnet. “The numbers of independent creators and the amount of money spent on content is growing every year. Changes to 512 are likely to make things even worse.”
Section 1201: Copyright Protection Gone Horribly Wrong
On the other hand, the DMCA also includes Section 1201, the "anti-circumvention" provisions that bar circumvention of access controls and technical protection measures, i.e. digital locks on software. It was supposed to prevent copyright "pirates" from defeating things like digital rights management (DRM is a form of access control) or building devices that would allow others to do so. In practice, the DMCA anti-circumvention provisions have done little to stop "Internet piracy.” Instead, they’ve been a major roadblock to security research, fair use, and repair and tinkering.
Users don’t experience Section 1201 as a copyright protection. They experience it as the reason they can’t fix their tractor, repair their car, or even buy cheaper printer ink. And attempts to get exemptions to this law for these purposes—which, again, are unrelated to copyright infringement and create absurd conditions for users trying to use things they own—are always met with resistance.
Professor Jessica Litman, of the University of Michigan Law School, laid out the problem of 1201 clearly:
The businesses that make products with embedded software have used the anti-circumvention provisions to discourage the marketing of compatible after-market parts or hobble independent repair and maintenance businesses. Customers who would prefer to repair their broken products rather than discard and replace them face legal obstacles they should not. It’s unreasonable to tell the owner of a tractor that if her tractor needs repairs, she ought to petition the Librarian of Congress for permission to make those repairs.
Section 1201 covers pretty much anything that has a computer in it. In 1998, that meant a DVD; in 2020, it means the Internet of Things, from TVs to refrigerators to e-books to tractors. Which is why people trying to repair, modify, or test those things—farmers, independent mechanics, security researchers, people making ebooks accessible to those with print disabilities, and so on—either have to abandon their work, risk being in violation of the law (including criminal liability), or ask the Library of Congress for an exemption to the law every three years.
Put simply, the existing scheme doesn’t discourage piracy. Instead, it prevents people from truly owning their own devices; and, as Litman put it, “prevents licensed users from making licensed uses.”
The DMCA is a mixed bag. Section 512’s safe harbor makes expression online possible, but the particulars of the system have real failures. And section 1201 has strayed far from whatever its original purpose was and hurts users far more than it helps rightsholders. To see why, the DMCA hearings must include testimony focused on the needs and experiences of all kinds of users and services.
San Francisco – The Electronic Frontier Foundation (EFF) is asking a state court to protect the identity of an anonymous Glassdoor commenter who is being targeted by their former employer. EFF filed a motion to quash a subpoena for identifying information of its client after the cryptocurrency exchange company known as Kraken filed suit against several anonymous reviewers seeking to identify them based upon a claim that they breached their severance agreements.
Glassdoor is a popular website where people share opinions of their current and former workplaces. After Kraken laid off several employees, people left anonymous reviews about the company on Glassdoor. EFF’s client, J. Doe, shared their views on working for Kraken, which ranged from praising the “skilled, knowledgeable, and nice colleagues” to a personal reflection that “I personally had a deep sense of trepidation much of the time.” Doe took care in writing the review, as Doe had signed a severance agreement promising not to disclose confidential information or disparage or defame the company. Kraken publicly responded to Doe’s review of the company on the Glassdoor site, thanking Doe for the feedback and wishing Doe the best.
However, in May of last year, Kraken changed course and began targeting Doe and other former employees. The company filed a lawsuit against ten Doe defendants, including our client, claiming they breached their severance contracts and seeking to identify them. Kraken also sent an email to former employees, demanding that they delete any reviews that were in violation of the severance agreement. Even though Doe believed they had complied with the agreement, Doe deleted the Glassdoor review.
“This litigation is designed to harass and silence current and former Kraken employees for speaking about their experiences at the company,” said EFF Staff Attorney Aaron Mackey. “Kraken’s efforts to unmask and sue its former employees discourages everyone from talking about their work and demonstrates why California courts must robustly protect anonymous speakers’ First Amendment rights.”
In the motion filed Tuesday in Superior Court in Marin County, California, EFF explains that courts have long recognized that attempts to unmask anonymous speakers not only harm the speakers’ First Amendment rights, but can chill the speech of others who may want to do the same but fear being unmasked. The motion asks the court to adopt stronger legal protections for Doe and other anonymous speakers, which require more than a mere allegation of illegal activity before the anonymity is breached.
“Kraken cannot show that Doe’s review was defamatory or otherwise unprotected by the Constitution, so it instead seeks to leverage its contract claims to identify, and potentially retaliate against, Doe,” said EFF Frank Stanton Legal Fellow Naomi Gilens. “Given Kraken’s tactics, we are asking the court to embrace stronger First Amendment protections for Doe and anyone else who is targeted for speaking out.”
The further the "Right to be Forgotten" (RTBF) online progresses from its original creation by Europe's Court of Justice, the broader and more damaging its ramifications seem to be. The latest attempt to import it is a rushed proposal in Uruguay. The complaints of multiple digital rights groups across Latin America show how unwise the Uruguayan proposal is—and how vexing efforts to adopt the right can quickly become.
The Court of Justice of the European Union (CJEU) started the current rash of RTBF proposals in 2014 by injecting the spirit of the "droit à l'oubli" laws of some of its member countries into pre-existing, European Union-wide, data protection law. The court's decision in the case was aimed at Google’s search results, though it applied to all search engines, and required them to de-index web pages from search results at the request of individuals when those pages contained personal information that was "out-of-date, inaccurate, or irrelevant.”
The CJEU sought to address a serious problem in recognizing a Right to be Forgotten. Many jurisdictions recognize that the easy discovery of past bad acts—for example, the records of past convictions when the perpetrator has been rehabilitated—can cause lasting and disproportionate harm. But working out how to map these concerns to the modern era is a serious challenge: one that involves balancing the benefits that the CJEU recognized with the dangers to free expression, including the dangers of obscuring facts from our online historical record, and of granting individuals a general right to control how conversations about them are conducted online. No matter how carefully the Right to be Forgotten might be defined, conflicting principles of due process and free expression inevitably render applying it a thorny and contested task.
Since the CJEU's ruling, we've seen some of these paradoxes play out within the EU. Can publishers be informed by search engines if their articles are suddenly removed from the index? No, say Europe’s data protection authorities, because that would violate the privacy of the individual making the request. Are RTBF takedowns decided judicially, or by private actors? In Europe, it's not the courts who make the initial determination; it's the search engines. Even in its home jurisdiction, the right remains a controversial and imprecise tool.
Concerns like these also play out in South America, where memories of attempts by authoritarian regimes to re-write history are still fresh, and Right to be Forgotten remedies have been pursued by powerful figures and celebrities who seek to erase controversial pasts.
In 2014, Argentina’s Supreme Court denied model María Belén Rodríguez’s request to delete content arising from search results associated with her name. Belén Rodríguez filed a claim for damages against Google and Yahoo Argentina, arguing that the unauthorized use of her image violated her rights by linking it to online erotic content. The Supreme Court held that search engines had no proactive monitoring obligations when it came to third-party content.
In Mexico, the National Institute for the Access to Information (INAI) granted Carlos Sánchez de la Peña's request to have three links removed from Google search results. One of the pieces Google was ordered to de-index, published by Fortuna Magazine, featured the transportation magnate’s fraudulent operations and suspicious benefits his company had received from Mexico’s government. This decision was overruled after being challenged in court by the magazine, represented by the digital rights group R3D.
In Peru, investigative journalism portal Ojo Público shed light on how complaints to administrative and judicial authorities had been used to censor websites, including search engines, from revealing organized crime. “Right to be Forgotten” claims using the country's data protection law were filed by prominent figures in organized crime, a former minister of State, and an ex-president of the Supreme Court.
The risks implicated in RTBF claims were also stressed by the Colombian Constitutional Court in a ruling that rejected a citizen's de-indexing request. In a legal action against El Tiempo, the main newspaper in the country, a Colombian citizen argued that her right to a good name and privacy were violated in the publication, and subsequent indexing by Google, of a newspaper article in which El Tiempo said that she participated in an alleged crime (one whose prescription period—the statute of limitations—expired before the case was ruled on). The court refused to order the search engines to de-index the content because it would constitute a form of prior control and turn the search engine into a censor of user-posted content. That, in turn, would undermine guiding principles of Internet architecture: “equal access, non-discrimination and pluralism.” However, the court required El Tiempo to update the published information and use robots.txt and metatags to prevent the indexing of the content by Google, raising equally complex free expression concerns.
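The de-indexing measures the court did order rely on voluntary web conventions rather than on any legal order against Google itself. A sketch of what such an order entails (the article path below is invented for illustration): a robots.txt rule at the site root, and a robots meta tag in the page itself.

```
# robots.txt at the newspaper's site root -- the path is hypothetical
User-agent: *
Disallow: /archivo/articulo-ejemplo.html
```

```html
<!-- robots meta tag placed in the article page's <head> -->
<meta name="robots" content="noindex">
```

Both mechanisms are requests that well-behaved crawlers choose to honor, not technical enforcement: the article remains online and directly reachable by anyone with the link. The remedy limits only discoverability through search, which is part of why it raises its own free expression questions.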
The controversial nature of the Right to be Forgotten, and the challenges of creating statutes that can withstand constitutional scrutiny, has, it seems, been overlooked by Uruguay’s new administration. The country’s incoming National Party administration released in late January a draft urgency law (Ley de Urgente Consideración) to be proposed once the new president takes office on March 1st. Among the bill’s more than 300 articles, the Right to be Forgotten provision stands out. If approved, it would let anyone request that search engines de-index data and delete social media posts with their personal information, “at a simple request,” even when published by third parties. It would apply to information the person deems inadequate, inaccurate, outdated or excessive. Inadequately addressing conflicting rights and line-drawing intricacies, the proposal contains only a clause stating that "public interest should be taken into account." Urgent bills proposed by the President pass automatically if not rejected or substituted after 90 days, as per the country's constitutional rules.
Uruguay already has a strong and comprehensive data protection law. The right of erasure is a crucial component of data protection law, but it should not be a shortcut to silence legitimate speech. Moreover, speech that potentially violates honor, image, and privacy is already subject to specific rules.
Uruguay's government should not presume that implementing a so-called Right to be Forgotten is simple, and should drop its attempt at a rushed adoption of an ambiguous and risky law.
Washington, D.C.—The Electronic Frontier Foundation (EFF) and advocates for public interest organizations will participate tomorrow in an open discussion about a controversial plan by the nonprofit Internet Society to sell the .ORG domain registry to private equity firm Ethos Capital.
EFF and hundreds of .ORGs oppose the $1.1 billion transaction, which threatens the privacy and free speech rights of thousands of groups. .ORGs operate around the world and across the political spectrum. They range from churches and social groups to nonprofits that help disaster victims, promote public health, serve the poor, and work on issues like race, immigration, women’s rights, the arts, and the environment. The sale, which will convert the registry into a for-profit entity, could raise costs for nonprofits and lead to censorship or even domain suspension under agreements with corporations or governments to monitor .ORG sites.
EFF Senior Staff Attorney Mitch Stoltz will discuss EFF’s concerns about the deal, and what it means for the future of large and small organizations whose websites depend on the .ORG registry to communicate online with clients, donors, and the public at large.
“The decision to sell the .ORG registry to a for-profit private equity firm run by domain name insiders came about after price caps on the registry were removed and changes in the .ORG Registry Agreement set new enforcement processes that can be used to censor organizations’ speech,” said Stoltz. “The NGO community has important questions about how speech rights will be protected and what mechanisms will exist to address these concerns. We look forward to having a dialogue with Internet Society and other groups about the future of Internet speech in light of this planned sale.”
Tomorrow’s panel includes Andrew Sullivan, Internet Society CEO; Benjamin Leff, American University Washington College of Law professor whose scholarship focuses on nonprofit regulations; and Marc Rotenberg, Electronic Privacy Information Center CEO and former chair of the Public Interest Registry (PIR). Kathryn Kleiman, former director of policy for PIR who is now practitioner-in-residence at American University Washington College of Law’s Intellectual Property & Tech Law Clinic, will facilitate the panel discussion.
EFF Senior Staff Attorney Mitch Stoltz
Fireside Chat—The Controversial Sale of the .ORG Registry: The Conversation We Should Be Having
American University Washington College of Law
4300 Nebraska Ave NW
Ceremonial Classroom, Warren Building Terrace Level
Washington DC 20016
Tuesday, Feb. 11
12:00 pm – 2:00 pm
Registration (in person):
For information about SAVE.ORG:
For a coalition letter to ICANN about the sale of .ORG:
San Francisco – An Open Source advocate and blogger who criticized a company, Open Source Security Inc. (OSS), has successfully defended against a defamation lawsuit after an appeals court found that the company’s accusations against the blogger were baseless.
The Electronic Frontier Foundation (EFF) represented Bruce Perens, a founder of the Open Source movement, in this appeal. The case started in 2017, when Perens posted a blog entry denouncing restraints that OSS placed on customers of its security patch software for the Linux Operating System. OSS sued Perens in response, despite the fact that Perens was merely exercising his First Amendment right to express his opinion about a matter of public concern to the worldwide Open Source community.
A lower court found that OSS’s lawsuit failed to plausibly state a claim for defamation and that Perens was entitled to fees under a California statute that protects defendants from SLAPPs, short for Strategic Lawsuits Against Public Participation. SLAPPs are malicious lawsuits that aim to silence critics by forcing victims to spend time and money to fight the lawsuit itself, draining their resources and chilling their right to free speech.
OSS appealed the decision to the United States Court of Appeals for the Ninth Circuit. On Thursday, a panel of judges ruled that the lower court decision should stand.
“Bruce Perens merely expressed his opinion about a controversial issue of concern to many Open Source fans all over the world,” said EFF Staff Attorney Jamie Williams, who argued the case in front of the Ninth Circuit panel last month. “OSS is free to publicize its disagreement with our client, but as the Ninth Circuit’s decision confirms, it cannot sue him just because he said something the company didn’t like.”
For the full opinion in OSS v. Perens:
Thanks to the adoption of a disastrous new Copyright Directive, the European Union is about to require its member states to pass laws requiring online service providers to ensure the unavailability of copyright-protected works. This will likely result in the use of copyright filters that automatically assess user-submitted audio, text, video and still images for potential infringement. The Directive does include certain safeguards to prevent the restriction of fundamental free expression rights, but national governments will need some way to evaluate whether the steps tech companies take to comply meet those standards. That evaluation must be both objective and balanced to protect the rights of users and copyright holders alike.
Quick background for those who missed this development: Last March, the European Parliament narrowly approved the new set of copyright rules, squeaking it through by a mere five votes (afterwards, ten MEPs admitted they'd been confused by the process and had pressed the wrong button).
By far the most controversial measure in the new rules was a mandate requiring online services to use preventive measures to block their users from posting text, photos, videos, or audio that have been claimed as copyrighted works by anyone in the world. In most cases, the only conceivable "preventive measure" that satisfies this requirement is an upload filter. Such a filter would likely fall afoul of the ban on “general monitoring” anchored in the 2000 E-Commerce Directive (which is currently under reform) and mirrored in Article 17 of the Copyright Directive.
There are grave problems with this mandate, most notably that it does not provide for penalties for fraudulently or negligently misrepresenting yourself as being the proprietor of a copyrighted work. Absent these kinds of deterrents, the Directive paves the way for the kinds of economic warfare, extortion and censorship against creators that these filters are routinely used for today.
But the problems with filters are not limited to abuse: Even when working as intended, filters pose a serious challenge for both artistic expression and the everyday discourse of Internet users, who use online services for a laundry list of everyday activities that are totally disconnected from the entertainment industry, such as dating, taking care of their health, staying in touch with their families, doing their jobs, getting an education, and participating in civic and political life.
The EU recognized the risk to free expression and other fundamental freedoms posed by a system of remorseless, blunt-edged automatic copyright filters, and they added language to the final draft of the Directive to balance the rights of creators with the rights of the public. Article 17(9) requires online service providers to create "effective and expeditious" complaint and redress mechanisms for users who have had their material removed or their access disabled.
Far more important than these after-the-fact remedies, though, are the provisions in Article 17(7), which requires that "Member States shall ensure that users...are able to rely" on limitations and exceptions to copyright, notably "quotation, criticism, review" and "use for the purpose of caricature, parody or pastiche." These free expression protections have special status and will inform the high industry standards of professional diligence required for obtaining licenses and establishing preventive measures (Art 17(4)).
This is a seismic development in European copyright law. European states have historically operated tangled legal frameworks for copyright limitations and exceptions that diverged from country to country. The 2001 Information Society Directive didn't improve the situation: Rather than establishing a set of region-wide limitations and exceptions, the EU offered member states a menu of copyright exceptions and allowed each country to pick some, none, or all of these exceptions for their own laws.
With the passage of the new Copyright Directive, member states are now obliged to establish two broad categories of copyright exceptions: those "quotation, criticism, review" and "caricature, parody or pastiche" exceptions. To comply with the Directive, member states must protect those who make parodies or excerpt works for the purpose of review or criticism. Equally importantly, a parody that's legal in, say, France, must also be legal in Germany and Greece and Spain.
Under Article 17(7), users should be "able to rely" on these exceptions. The "protective measures" of the Directive—including copyright filters—should not stop users from posting material that doesn't infringe copyright, including works that are legal because they make use of these mandatory parody/criticism exceptions. For avoidance of doubt, Article 17(9) confirms that filters "shall in no way affect legitimate uses, such as uses under exceptions or limitations provided for in Union law" and Recital 70 calls on member states to ensure that their filter laws do not interfere with exceptions and limitations, "in particular those that guarantee the freedom of expression of users".
As EU member states move to "transpose" the Directive by turning it into national laws, they will need to evaluate claims from tech companies who have developed their own internal filters (such as YouTube's Content ID filter) or who are hoping to sell filters to online services that will help them comply with the Directive's two requirements:
1. To block copyright infringement; and
2. To not block user-submitted materials that do not infringe copyright, including materials that take advantage of the mandatory exceptions in 17(7), as well as additional exceptions that each member state's laws have encoded under the Information Society Directive (for example, Dutch copyright law permits copying without permission for "scientific treatises," but does not include copying for "the demonstration or repair of equipment," which is permitted in Portugal and elsewhere).
Evaluating the performance of these filters will present a major technical challenge, but it's not an unprecedented one.
Law and regulation are no stranger to technical performance standards. Regulators routinely create standardized test suites to evaluate manufacturers' compliance with regulation, and these test suites are maintained and updated based on changes to rules and in response to industry conduct. (In)famously, EU regulators maintained a test suite for evaluating compliance with emissions standards for diesel vehicles, then had to undertake a top-to-bottom overhaul of these standards in the wake of widespread cheating by auto manufacturers.
Test suites are the standard way of evaluating and benchmarking technical systems, and they provide assurances to consumers that the systems they rely on will perform as advertised. Reviewers maintain standard suites for testing the performance of code libraries, computers and subcomponents (such as mass-storage devices and video cards), and protocols and products, such as 3D graphics rendering programs.
We believe that the EU's guidance to member states on Article 17 implementations should include a recommendation to create and maintain test suites if member states decide to establish copyright filters. These suites should evaluate both the filters' ability to correctly identify infringing materials and non-infringing uses. The filters could also be tested for their ability to correctly identify works that may be freely shared, such as works in the public domain and works that are licensed under permissive regimes such as the Creative Commons licenses.
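One way a member state could structure such a suite is as a list of test uploads with expected outcomes, run against any candidate filter and scored on false positives (lawful uses blocked) and false negatives (infringing uses passed). The sketch below is purely illustrative: `filter_decision`, the test-case descriptions, and the file identifiers are all hypothetical, not any vendor's actual interface.

```python
# Hypothetical sketch of a filter compliance test suite.
# `filter_decision` stands in for any vendor's upload filter:
# it takes an upload identifier and returns True if the upload
# would be blocked.

from dataclasses import dataclass


@dataclass
class TestCase:
    description: str
    upload: str          # identifier of the test upload (hypothetical)
    should_block: bool   # expected outcome under Article 17


SUITE = [
    TestCase("verbatim copy of a claimed work", "full_track.mp3", True),
    TestCase("short quotation for criticism (Art. 17(7))", "review_clip.mp3", False),
    TestCase("parody of a claimed work", "parody.mp3", False),
    TestCase("public-domain recording", "pd_recording.mp3", False),
]


def evaluate(filter_decision):
    """Count false positives (lawful uses blocked) and
    false negatives (infringing uses allowed through)."""
    false_pos = false_neg = 0
    for case in SUITE:
        blocked = filter_decision(case.upload)
        if blocked and not case.should_block:
            false_pos += 1
        elif not blocked and case.should_block:
            false_neg += 1
    return false_pos, false_neg
```

A filter that simply blocks every match would fail three of the four cases above, which is precisely the over-blocking that Article 17(7) forbids.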
EFF previously sketched out a suite to evaluate filters' ability to comply with US fair use. Though fair use and EU exceptions and limitations are very different concepts, this test suite does reveal some of the challenges of complying with Article 17's requirement that EU residents be "able to rely" upon the parody and criticism exceptions it defines.
Notably, these exceptions require that the filter make determinations about the character of a work under consideration: to be able to distinguish excerpting a work to critique it (a protected use) versus excerpting a work to celebrate it (a potentially prohibited use).
For example, a creator might sample a musician's recording in order to criticize the musician's stance on the song's subject matter (one of the seminal music sampling cases turned on this very question). This new sound file should pass through a filter, even if it detects a match with the original recording, after the filter determines that the creator of the new file intended to criticize the original artist, and that they sampled only those parts of the original recording as were necessary to make the critical point.
However, if another artist sampled the original recording for a composition that celebrated the original artist's musical talent, the filter should detect and block this use, as enthusiastic tribute is not among the limitations and exceptions permitted under the Infosoc Directive, nor those mandated by the Copyright Directive.
This is clearly a difficult programming challenge. Computers are very bad at divining intent and even worse at making subjective determinations about whether the intent was successfully conveyed in a finished work.
However, filters should not be approved for use unless they can meet this challenge. In the decades since the Acuff-Rose sampling decision came down in 1994, musicians around the world have treated its contours as a best practice in their own sampling. A large corpus of music has since emerged that fits this pattern. The musicians who created (and will create) music that hews to the standard—whose contours are markedly similar to those mandated in the criticism/parody language of Article 17—would have their fundamental expression rights as well as their rights to profit from their creative labors compromised if they had to queue up to argue their case through a human review process every time they attempted to upload their work.
Existing case-law among EU member states makes it clear that these kinds of subjective determinations are key to evaluating whether a work is entitled to make use of a limitation or exception in copyright law. For example, the landmark Germania 3 case demands that courts consider "a balancing of relevant interests" when determining whether a quotation is permissible.
Parody cases require even more subjective determination, with Dutch case law holding that a work can only qualify as a parody if it "evokes an existing work, while being noticeably different, and constitutes an expression of humor or mockery." (Deckmyn v. Vandersteen (C-201/13, 2014)).
Article 17 was passed amid unprecedented controversy over the consequences for the fundamental right to free expression once electronic discourse was subjected to judgments handed down by automated systems. The changes made in the run-up to the final vote were intended to ensure a high level of protection for the fundamental rights of European Internet users.
The final Article 17 text offers two different assurances to European Internet users: first, the right to a mechanism for "effective and expeditious" complaint and redress, and second, Article 17(7) and (4)'s assurance that Europeans are "able to rely" on their right to undertake "quotation, criticism, review" and "use for the purpose of caricature, parody or pastiche" ... "in accordance with high industry standards of professional diligence".
The Copyright Directive passed amid unprecedented controversy, and its final drafters promised that Article 17 had been redesigned to protect the innocent as well as punish the guilty, this being the foundational premise of all fair systems of law. National governments have a duty to ensure that it's no harder to publish legal material than it is to remove illegal material. Streamlining the copyright enforcement system to allow anyone to block the publication of anything, forever, without evidence or oversight presents an obvious risk for those whose own work might be blocked through malice or carelessness, and it is not enough to send those people to argue their case before a tech company's copyright tribunal. If Europeans are to be able to "rely upon" copyright limitations and exceptions, then they should be assured that their work will be no harder to publish than anyone else's.
The bad news is that Twitter has disclosed a failure to protect users' phone numbers, again. The good news is that Twitter users can take steps to protect themselves.
Earlier this week, Twitter announced it had discovered and shut down “a large network of fake accounts” that were uploading large numbers of phone numbers and using tools in Twitter’s API to match them to individual usernames. This type of activity can be used to build a reverse-lookup tool, to find the phone number associated with a given username.
These tools in Twitter's API can only match a phone number to a Twitter account if the account holder 1) has “phone number discoverability” turned on in their settings and 2) has a phone number associated with their account. If either of those is not true for you, then your account was not exposed by this problem. Here's how to check your settings and make sure they are where you want them:
1. To check your discoverability settings, head to the Privacy and safety section of your account settings, then scroll down a bit and select Discoverability and contacts—or just go to https://twitter.com/settings/contacts.
2. You want “Let people who have your phone number find you on Twitter” unchecked. (And while you’re at it, make sure “Let people who have your email address find you on Twitter” is unchecked, too.) Unless you are in the EU, where the GDPR requires that features like this be opt-in, these are both checked by default.
3. To check whether or not you have a phone number associated with your account, go to the Account section of your settings and select Phone—or just go to https://twitter.com/settings/phone.
4. If you see a phone number there that you do not want associated with your profile, click Delete phone number.
There are a number of reasons you might have a phone number here: you may have added it when you signed up (Twitter sometimes requires phone numbers for new accounts), or when you turned on SMS-based two-factor authentication. Note that, even if you disable two-factor authentication, the phone number you used for it will still be hanging around in your account information, and you’ll have to go to that “Phone” section to affirmatively delete it from your account.
Most egregiously on Twitter’s part, you may also have a phone number in your account because Twitter made you put it there to prove you’re not a spammer. When Twitter marks an account as a “bot,” it may require the account holder to provide a phone number to unlock and get back into their account.
If Twitter is going to make users provide this sensitive identifying information to create and even regain access to their accounts, it has a responsibility to protect that information—and it has not fulfilled that responsibility.
Twitter has publicly disclosed a security “incident” that points to long-standing problems with how the service handles phone numbers. Twitter announced it had discovered and shut down “a large network of fake accounts” that were uploading large numbers of phone numbers and using tools in Twitter’s API to match them to individual usernames. This type of activity can be used to build a reverse-lookup tool, to find the phone number associated with a given username.
Problems with tools that allow users to find accounts using the phone numbers associated with them are not new at Twitter (or at Facebook, for that matter). And, given the way these features are designed, their potential for abuse is not likely to go away anytime soon.
The best way for Twitter to protect its users is to minimize the number of accounts with phone numbers tied to them, and make it clear to users when and how those numbers might be exposed. That’s why Twitter needs to stop pressuring users to add their phone numbers to their profiles and stop making those phone numbers discoverable by default.
When you are new to a service or first download an app, you may see a prompt to upload your contacts to find people you already know on the app. Twitter offers one of those contact upload tools.
The problem is that, if Twitter wants to connect you with your friends via their phone numbers, it needs to offer an API to support it. While that kind of API can and should come with limitations to prevent someone from maliciously revealing people’s identities and contact information, there will almost always be a way around it. In Twitter’s case, one of the limitations in place was to reject anyone who tried to upload a long list of sequential phone numbers—a sign that this person was almost certainly not uploading an address book in an attempt to find friends.
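To see why a check like that is brittle, consider a naive sequential-number heuristic. This is our own illustration, not Twitter's published logic: it flags an upload whose entries mostly increase by exactly one, so a list of randomized numbers drawn from the same range sails straight through.

```python
# Illustrative only: a naive "sequential upload" detector of the kind
# described above. Twitter has not published its actual check; this is
# an assumption made to show why such heuristics are easy to evade.
import random


def looks_sequential(numbers, threshold=0.9):
    """Flag an upload if most adjacent entries differ by exactly 1."""
    diffs = [b - a for a, b in zip(numbers, numbers[1:])]
    sequential = sum(1 for d in diffs if d == 1)
    return sequential / len(diffs) >= threshold


# A sequential block of 10,000 numbers is flagged...
seq = list(range(14155550000, 14155560000))

# ...but 10,000 random numbers from the same 10-million-number
# range look nothing like an ascending run and are not.
rand = random.sample(range(14150000000, 14160000000), 10000)
```

Any defense that tries to recognize "obviously fake" address books this way is playing whack-a-mole; the randomized list is statistically indistinguishable from a very large legitimate contact upload.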
The workaround is almost comically simple: someone could just upload a long list of randomized phone numbers instead. And that’s how the security researcher whose work tipped Twitter off to this problem was able to match up phone numbers and usernames not just for people in his contacts, but for 17 million unsuspecting Twitter users, including high-profile officials and politicians around the world.

Who It Affects
This tool can only match a phone number to a Twitter account if the account holder 1) has “phone number discoverability” turned on in their settings and 2) has a phone number associated with their account. If either of those is not true for you, then your account was not exposed by this problem. For a step-by-step guide to checking your settings, visit our tutorial here.
This is not the only time Twitter has recently messed up its management and protection of user phone numbers: just last October Twitter fessed up to using users’ two-factor authentication numbers for targeted advertising.
But, then as now, Twitter’s plans to fix the problems that exposed user information in the first place are troublingly vague. Twitter’s announcement claimed that the company has made a “number of changes to this [contact upload tool] so that it could no longer return specific account names in response to queries.”
But what exactly are those changes, and how will they work? After failing the public trust, Twitter owes its users more transparency so that they can judge for themselves whether the fixes were adequate.
So, you own or are thinking of buying a Ring camera. This post outlines a list of privacy and civil liberties concerns we have with Amazon’s Ring system so that you can be a more informed consumer, or—if you already own a Ring camera—be a more considerate neighbor.

If You’re Thinking of Buying a Ring Camera
1. You are not the only one who can access your footage.
Your Ring footage isn’t private. It’s in the cloud. That means that you are not the only one with access to your footage. Think about all of the personal events cameras inside and outside of your home will capture. Do you want strangers being able to learn your routines by watching you leave and return to your house every day? And think about the footage of others—friends, family, and neighbors who are nearby. Would they be comfortable with video evidence of when they walk their dogs or when their car isn’t home at its normal hour?
With a Ring camera, Amazon employees have access to the cloud where your footage is stored—and a few have already been fired for unauthorized access to people’s personal videos.
If your town’s police department has a partnership with Ring, you can also anticipate getting email requests from them asking for footage from your camera any time a suspected crime occurs nearby. Planning on refusing to share your footage? In some circumstances, police can still get your footage by bringing a warrant straight to Amazon—and you would never know.
Knowing all this, would you be comfortable if other houses—perhaps across the street from your own—had a Ring camera pointing at you? If your neighbors see that you have a Ring, consider that they may purchase one as well.
2. Bad actors have used them to look inside homes and talk to children.
Ring is currently involved in a proposed class action lawsuit concerning a number of high-profile incidents in which people were able to gain access to Ring cameras and use them to traumatize children and harass families. Ring blamed these incidents on its customers by saying that people had made the mistake of reusing usernames and passwords that had previously been released in hacks. However, as EFF technologists have argued, gaining access to Ring cameras by repeatedly trying to log in with leaked usernames and passwords until you find one that works would not have been possible if Ring took security seriously. Ring did not put basic obstacles in place to stop repeated log-in attempts, nor did it encourage customers to enable two-factor authentication. These are standard-issue protections on other devices that Ring appears to have been negligent in enforcing.
This most recent round of controversies doesn’t even address previous vulnerabilities discovered in Ring, like one found last year that revealed a user’s Wi-Fi password.
3. There is no conclusive evidence that Ring cameras prevent crime.
The idea that Ring cameras prevent crime is based on the idea that a potential burglar will see the Ring doorbell, recognize it, and then decide to turn around. But time and time again, research has shown that surveillance won’t necessarily prevent crime. What it can do, however, is create rifts between citizens and authorities, invade people’s privacy, and exacerbate racial bias by allowing people with preconceived notions of who is deemed “suspicious” to apply those assumptions on a mass scale.
4. Ring’s system is likely to make you paranoid.
Maybe you live in a pretty safe neighborhood. Maybe you just purchased a Ring camera for the purpose of keeping an eye on the front door, or being able to talk to people who ring your doorbell. But either way, Ring’s default setup is primed to instill paranoia: Ring doorbells send you an alert whenever the motion activation is triggered, which means that your phone will buzz every time a squirrel, falling snow, a dog walker, or a delivery person sets off the Ring. There’s no faster way to convince someone, incorrectly, that their house is under siege and their neighborhood is crime-ridden than to startle them with every innocent passerby.

If You Already Own a Ring Camera
You have a Ring camera, or several, outside your house. But these incessant news stories about Ring are bothering you. What can you do about it? Here are a few general tips to improve your security and use your Ring camera in a more responsible and conscientious way.
1. Don’t point your Ring camera at anyone’s property but your own.
This might be a tricky one depending on your housing situation, but it’s imperative not to put your neighbor’s house under constant surveillance. You installed the camera to watch your porch, not to stare through your neighbor’s window all day long. You wouldn’t want a camera controlled by other people, and accessible by Amazon employees and the police, documenting when you’re home or not, so why would you do that to your neighbors?
2. Tell police to get a warrant.
When police want footage from your Ring camera, and your police department has a partnership with Ring, the request will come in the form of an email. It will have two options: “Share footage” or “Review footage before you share.” There is also a third option, which is to ignore the email, which requires police to get a warrant. Getting a warrant means that police need to share with a judge a legitimate reason why they need your footage. In an ideal world, the police would present you with the warrant, allowing you to see that a judge has signed off on the necessity of the request. Even if that warrant is going straight to Amazon, however, it’s still important that a court have oversight over when and to what extent police get footage.
3. Use strong passwords and unique usernames.
Given that people have exploited Ring’s security, and because any large collection of very personal material will always have a target on it, it’s more important than ever that you use strong, random, and original credentials when you create accounts. EFF has resources that can help you generate long random passwords.
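If you want to generate such a credential yourself, a minimal sketch using Python's standard `secrets` module is shown below. The function name and the 20-character default are our own choices, not a Ring or EFF recommendation; the key point is to use a cryptographically secure random source rather than an ordinary one.

```python
# Minimal password generator using Python's secrets module.
# secrets.choice draws from a cryptographically secure RNG,
# which is the right tool for credentials (unlike random.choice).
import secrets
import string


def random_password(length=20):
    """Return a random password drawn from letters, digits,
    and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))


print(random_password())  # prints a fresh 20-character password each run
```

Pair a password like this with a password manager so you never have to reuse it across sites, which is exactly the failure mode the Ring incidents exploited.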
4. Delete footage as often as you can.
Amazon’s Ring appears to have data storage plans that allow different access and retention times for footage on Amazon’s cloud. Whatever your storage plan, we recommend deleting any and all footage your camera records as soon as possible. Although it is unclear whether Amazon has any way of recovering your footage after it is deleted, this process should reduce the amount of ambient conversation and unintentionally captured footage of people retained and sitting on the server.
5. Take steps to limit third-party trackers.
A recent study by EFF found that the Ring app for Android sends personalized and identifiable data to third-party trackers. These trackers do analytics for Ring, telling them how people are using the app, but also receive several pieces of information like what kind of phone you’re using or your personal IP address. Even small amounts of information allow tracking companies to form a “fingerprint” that follows the user as they interact with other apps and use their device, allowing trackers to spy on a user’s digital life. Whether you’re using an iPhone or Android, there are ways that you can make it harder for mobile ads to track you on your phone.
6. Don’t let surveillance convince you everyone is “suspicious.”
This is an important point because it might just be the one that prevents people from getting hurt. Not everyone you see through your Ring camera is doing something suspicious. If someone steps under an awning to avoid a downpour of rain, or if a realtor is in your neighborhood looking at homes, err on the side of trust. Just because you think a person looks like they “don’t belong” in your neighborhood doesn’t mean they’re a threat—and it definitely means you don’t have to broadcast their face on an app or call the police.
Evan Greer is many things: A musician, an activist for LGBTQ issues, the Deputy Director of Fight for the Future, and a true believer in the free and open internet. Evan is a longtime friend of EFF, and it was great to chat with her about the state of free expression, and what we should be doing to protect the internet for future activism.
Among the many topics we discussed was the tension that often arises between justice-oriented work and free expression activism, and how policies that promote censorship—no matter how well-intentioned—have historically benefited the powerful and harmed vulnerable or marginalized communities. This is something that we think about a lot in our work at EFF. Whether we’re talking about policies intended to curb online extremism or those meant to prevent sex trafficking, it’s important that we look at the potential collateral damage that will inevitably occur. In this interview, Evan talks about what we as free expression activists should do to get at that tension and find solutions that work for everyone in society.
We also talked about something near and dear to both of us: The power that the Internet still has to connect people across borders, and to allow disparate groups to mobilize. We’ll hear from Evan about what the Internet has meant for her music and activism, and what we need to do to protect it for those who come next.
Jillian C. York: What does free speech mean to you?
To me, it’s about the ability to challenge authority, power, and established norms. One of the things I always try to remind people who are now pushing for more regulation of speech, censorship, and control over the flow of information is how rapidly norms can change, and how recently in our history it was considered normal for homosexuality to be criminalized or stigmatized.
For me it’s about recognizing that ideas currently seen as fringe or controversial may be seen as totally mainstream in even a matter of decades. And if we prevent people from saying those things out loud because we don’t like them right now or because we think they’re controversial, then we’re actually preventing society as a whole from progressing. And so I think, to me, it’s the most fundamental issue of tension between authoritarian structures and institutions that, almost universally, tend to trend toward the status quo. Institutions have, sort of built into their DNA, the [desire] to continue to exist. And to do that, the institution needs to maintain or replicate the conditions in which it rose to power, whether that institution is a government, a company, a religious group, or even a popular subculture. And so, to me, free speech or expression is about the ability to challenge those accepted norms and institutionally-enforced norms toward building a better society where we can build norms based in people’s human need rather than just based on momentum or status quo.
York: Absolutely. I think what you said about the fact that ideas currently seen as fringe may become mainstream...the reverse may be true as well. So what would you say right now to the more progressive people who aren’t so keen on this idea of free speech?
I think it’s super important to look through history. I think Cindy Cohn had an op-ed in Wired that summarized this in a way that I’ve been dancing around...I think that if you just look throughout history, even going back to the times of kings, it’s pretty easy to draw a correlation that censorship, largely, has always benefited those in power at the expense of those who do not have power. And even when [censorship has] been explicitly marketed as, or genuinely intended to benefit marginalized people and communities and voices, in the end, the net effect always seems to be that it reinforces the status quo and props up existing power structures. And if those power structures are broken or unjust, then more censorship—even if it’s intended to help the people being hurt by those power structures—largely ends up codifying those power structures and exacerbating discrimination and injustice, and takes away one of the most powerful tools we have to disrupt, or undermine, or overthrow those power structures.
I think it’s super important too for people to look at the edges. We’ve also made a lot of progress [recently]. Ideas that were fringe not that long ago are becoming mainstream, and the opposite is also true. And that leads folks, I think, to believe that fringe ideas are bad ideas. But it’s important to remember that, on a concrete level, anyone who’s done support work for US-held political prisoners is acutely aware of how far the US government’s reach extends: something as simple as expressing support for an organization based outside the US can land you in prison. Or how payment processors have kicked off people who are doing political prisoner organizing, or organizing to support activists on the ground in Palestine, or in other places where the US government has an imperial interest in preventing money from flowing.
And so it’s easy to look around and not see mainstream or larger organizations that represent marginalized communities being hurt by these policies, but if you look to the fringes, there’s already this long, documented history of harm.
There’s also a logical fallacy here: The argument goes that these big centralized platforms are already doing a terrible job of moderating content; therefore they should moderate more. I just don’t understand that. I think we can and should push them to do better with the moderation practices that they have, and to be more transparent about them so we can hold them accountable and show where they’re falling short. But I don’t get jumping to “let’s do more of the thing they’re doing a bad job at” when we haven’t even fixed the fact that they’re doing a bad job of it, and I think most of us agree that we’re not sure they can ever do a good job of it.
York: You know that I agree with this. So let me take this in a bit of a different direction and ask: Whether there’s a rise in hate speech, or an amplification of it, if we don’t see content moderation as the solution or at least the only solution, what do you believe we should do to counter it?
I think this is the fundamental question and I think it’s super important that those of us who fiercely believe in defending free expression ask ourselves this question, because it’s not acceptable at this point to be like “yeah, white supremacy on the internet is a problem, but is it really that big of a problem?” That’s not okay. It’s really that big of a problem, and it needs to be fought and addressed.
I think your question of whether there’s more of it or it’s just being amplified is a valid question to ask, but in the end, the net effect is the same. This corrosive ideology that’s harmful to human society and leads to actual acts of violence needs to be fought at every turn. I’m kind of old school about it, I’m more about punching Nazis than I am about getting [corporations] to censor them.
I think we need the internet, and we need an open Internet, in order to mobilize and organize against the systems and structures and underlying ideologies that lead to all of this. I think it’s super important too that we zoom out and think about this in a pretty critical way. I think if you ask yourself “Which is doing more harm to society: The prison-industrial complex, or a few pretty loud white supremacists on the Internet?” [you’ll see that] the prison-industrial complex as a whole—which is an authoritarian white supremacist structure built into our society and largely accepted—is just in terms of numbers committing far more atrocities than these high-profile assholes on the Internet, but we’re spending tremendous amounts of time and energy dealing with them.
I’m not saying we shouldn’t be dealing with them, but we should think about what we really want to do and how we want to organize. If the solution we come up with to deal with those assholes on the internet kneecaps our ability to eventually overthrow and dismantle the prison-industrial complex, are we actually creating a net benefit for society and for the people most affected by this structural oppression? How do we address the underlying income inequality and gross lack of basic human rights and dignity that a huge number of people in [the US] and around the world have that fuels so much of this online nonsense?
Call me a radical, but I like to look at problems at their roots. We’ve seen this with respect to [media policy] over time, like claims that video games are the culprit for school shootings, or that encryption makes people do bad things. We’ve smacked down these arguments again and again, but here we are again. Do we want to spend all of our time and energy trying to deal with the speech itself, or addressing the underlying systems of oppression that are actually doing the harm and that can only be confronted with meaningful, deep grassroots organizing and community building, and by listening to the people who are being most affected, and to the people at the fringes whose voices are most likely to be silenced by any scheme that we come up with to deal with a relatively small number of destructive and dangerous assholes who are sort of ruining the party for everyone on the Internet?
York: It’s funny, that reminds me of my conversation with Asta Helgadottir, who spoke about how the amount of money that goes toward copyright enforcement is so much bigger than that which goes toward countering child sexual abuse imagery.
Yeah, I can see that. It’s also really important that we as free speech defenders can’t be the party of do-nothing. We can’t just be like “yeah it’s working fine the way it is, just leave it alone,” because it’s not working fine the way it is. We at Fight for the Future, and I personally, have been kicking around different possibilities, policy solutions—but short of fixing the underlying problems, I don’t think that anything solves the problem of hate speech on the Internet. There will always be people who use the Internet to amplify what they’re doing, and the internet will always amplify the mainstream. You can be a Republican politician basically calling for policies that amount to genocide and you’re not going to get wrapped up in Facebook’s hate speech algorithm because you’re using all the right words. That’s just another example of how these policies ethically fail.
But beyond that, there are other things worth looking at. The structural thing, in terms of the internet itself, and speech, is centralization. And so finding policies that address the fact that there are now basically three websites that matter where you can speak and be heard, and these websites control an inordinate amount of speech and people’s ability to hear it...that’s what creates the problem that makes these questions—like “should we ban Alex Jones?”—hard to answer. If there were twenty platforms, that becomes a pretty easy question. But the weight of those decisions is so much greater when there are so few platforms or places where people can speak, and where having your own website no longer matters when it can’t be shared on those three platforms.
That’s the underlying problem, and so we should look at policies that address that. Antitrust is interesting to look at, but I don’t think it’s a silver bullet that’s going to solve all of this. I think algorithmic transparency, or moderation transparency is another one—Facebook should make it easy for us, or for journalists, to really get a sense of what they’re taking down and how they’re making those decisions, so we can see the collateral damage that’s happening and hold them accountable to make sure they’re enforcing their existing policies fairly. It doesn’t involve taking down more speech, it means figuring out what they’re taking down now before we ask them to take down more.
And then another thing that I’ve been toying with, and I’m not totally sure about—it’s almost in the realm of Twitter banning political ads—is a temporary moratorium on algorithmic amplification entirely. There’s a huge difference between “anyone can say what they want” and “anyone can say what they want and Facebook is going to put its thumb on the scale and amplify certain types of speech that it knows will generate controversy and clicks and comments” and that basically creates an incentive to amplify some of the worst kinds of speech. Or on YouTube, where a kid starts watching a video about gaming, and then three videos later is being recommended white supremacist content. There’s a big difference between that and letting things go viral—like the internet used to be. I’m not going to pretend awful things don’t go viral, but it’s different from Facebook intentionally weighting the scales and pushing content that it knows is hateful but that makes it money.
I think that’s an interesting thing to look at: How much of the way we’re seeing the conversation being distorted right now is because of that amplification or because there’s an uptick in the number of assholes who believe in this ideology.
So I’m not sure about it, but the decisions we’re making about content moderation right now are arguably some of the most important decisions that humans are making right now, period, and they’re going to shape our civilization. So we need to make these decisions with that level of seriousness, thinking critically while we do it, and not just reacting to the news cycle or to partisan winds and whichever way they’re blowing.
York: Absolutely—Okay, let me change directions and ask you my favorite question. Who’s your free speech hero?
I’d have to say Chelsea Manning. First of all, because she’s continuing to suffer at the hands of the US government for blowing the whistle and speaking out and fighting for what she believes in, but also because the conversations we’ve had over the years while she was incarcerated were so instrumental in me shaping my thoughts on this. She’s such a critical thinker and someone who’s directly experienced the ways in which government crackdowns on expression can go so horribly wrong. And so she has a very smart analysis on this. I really value our friendship and the ways she thinks about these issues and has acted on them.
York: I think that’s a great answer, and you’re not the only one who’s named her. She’s a hero for so many of us. Okay, so the one other thing I want to ask is to pull in the fact that you’re a musician and ask: Is there anything from that background that has inspired your work?
There’s two angles there that are really interesting. Just in general, my life as an independent musician in the early days of the Internet shaped why I care about this, and why I think it’s so crucial to defend a free and open internet.
And for me as a trans, independent artist who never had a record label and was writing songs about overthrowing capitalism and being queer, I was never going to find mainstream success in a world where the gatekeepers of the musical community were executives and mostly white male writers for Rolling Stone (although even that has shifted over the past decades).
So me and some of my compatriots were some of the first artists to put our music online for free download. We were putting our music on Archive.org before Napster was a thing. I instantly saw the huge potential of it. I’d show up at a show in Prague where I don’t speak the language and there are like, 200 punk kids who know all the words of my songs. I’d never been there or sold a CD in that area, but the Internet existed, and people were able to find this music they connected with and share it, and instantly, it felt like this is the most powerful force I’d ever seen for lifting up the voices that are so often left out of the conversation in our society, and I’ve never lost sight of that. More and more we see the downsides of it too, the ways in which it can be used to silence people and amplify existing forces of oppression but...I still believe in the Internet, and I still believe that it’s a net positive force for society, and that the fights we have over policy that surround it are so essential because of that revolutionary and transformative power that it still holds, even with all of the downsides.
But also, having kind of an understanding of the ways that people are used as pawns in policy. For example, I’m a member of BMI, I get their emails, and they’re constantly preying on the fears of artists and on our self-esteem and feelings of being screwed over by an unscrupulous industry, and using that to convince people that copyright maximalist policies are awesome and exactly what’s needed to protect individual artists and creators, when you and I both know that’s not true; in fact, those policies largely line the pockets of executives and tech companies that are now the new gatekeepers of the music industry.
To me, that’s informed the way I think about all of these issues. I’m acutely aware of how policies that claim to do one thing can easily be used to do something much more nefarious. I think with copyright, that’s just such a clear example where none of these policies would have benefited me as an independent artist. That understanding has shaped the way I think about these things more broadly and always makes me question them, even when something is well-intentioned. There are just always ways they can backfire, and when they do, it hurts the weakest to the benefit of the most powerful.
In a big win for free speech on the Internet, an EFF lawsuit has compelled the nation’s second-largest public university to stop censoring dissent on its social media. To settle our First Amendment case, Texas A&M University (TAMU) agreed to end its automatic and manual blocking of comments posted on its Facebook page by People for the Ethical Treatment of Animals (PETA) about the school’s dog labs. This legal victory is an important step forward in EFF’s nationwide campaign to end censorship in government-operated social media.
Why We Fight Censorship in Government Social Media
Social media has changed the way people all over the world connect and communicate. As the U.S. Supreme Court explained in Packingham v. North Carolina (2017): “While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace—the vast democratic forums of the Internet in general, and social media in particular.” (The Court in Packingham repeatedly cited EFF’s amicus brief.)
Government has fully embraced social media to communicate with the public. From federal departments to small-town bureaus, government agencies use their social media to conduct official business and engage in two-way communications with the public, and to facilitate communications among members of the public. In the words of a growing number of courts, governments thereby create “interactive spaces” where members of the public can talk back to the government and communicate with each other. While the government generally has the prerogative to control the content of its social media posts, the First Amendment protects the right of technology users to express their own views in the interactive spaces.
Unfortunately, many government officials respond to criticism in social media by deleting user comments or blocking users. This burdens First Amendment rights. Perhaps most famously, President Trump blocked critics from his Twitter account, prompting a lawsuit from the Knight Institute, which EFF supported with amicus briefs in the district and appellate courts. Many state and local officials have also censored their social media. These include the Hunt County (Texas) Sheriff’s Office, against which EFF filed an amicus brief, and the City of Norman (Oklahoma), against which EFF provided oral argument.
Our PETA v. TAMU lawsuit
In 2016, PETA launched an advocacy campaign against TAMU’s dog labs, which led some supporters to post critical comments on TAMU’s Facebook page. TAMU responded by blocking comments from PETA and its supporters. Its censorship scheme had two components. First, TAMU used automatic filters to identify and block comments that contained words like “PETA” and “lab.” Second, if PETA-generated posts did not contain these forbidden words, then TAMU manually deleted them.
In May 2018, EFF filed a First Amendment lawsuit on behalf of PETA against TAMU. We sought forward-looking injunctive and declaratory relief. We filed in the U.S. District Court for the Southern District of Texas. Our co-counsel are Gabriel Walters of the PETA Foundation (which includes PETA’s law office) and Christopher Rothfelder of Houston-based Rothfelder & Falick, L.L.P.
At oral argument in September 2018, the district court denied TAMU’s motion to dismiss. Ruling from the bench, the court acknowledged that “the foundational right of all Americans,” namely the freedom of speech, applied to new technologies. The court added: “There were folks who said the First Amendment doesn’t cover movies and television … They weren’t successful.” The court also expressed skepticism of TAMU’s suggestion that PETA could express itself elsewhere: “You sound like [an unsuccessful advocate before] the Supreme Court, when it says, ‘You don’t need to use the auditorium, you can print posters. And you don’t need to print posters, you can march in the streets. You don’t need to march in the streets.’ If you added them all up, you just don’t need to talk.”
In our motion for summary judgment, we demonstrated three key facts:

- Facebook pages contain interactive spaces for speech by page visitors.
- TAMU by policy and practice allows a dizzying quantity and variety of speech by the public on TAMU’s Facebook page.
- TAMU used automatic and manual tactics to block comments from PETA and its supporters about TAMU’s dog lab.
We also demonstrated that TAMU’s censorship scheme violated the First Amendment. We made three principal arguments.
First, TAMU engaged in viewpoint-based discrimination against PETA, that is, against PETA’s viewpoint that TAMU’s dog labs are cruel and should be closed. Viewpoint discrimination is unlawful in all types of government forums.
Second, TAMU engaged in content-based discrimination against PETA, that is, against the entire subject matter of the dog labs. TAMU designated the interactive spaces of its Facebook page as a public forum for speech by the public. In such forums, TAMU cannot engage in content-based discrimination, even if it is not also engaged in viewpoint-based discrimination.
Third, TAMU violated PETA’s First Amendment right to petition the government for redress of grievances. This includes PETA’s right to use TAMU’s Facebook page to demand that TAMU close its dog labs.
Our PETA v. TAMU victory
Soon after our summary judgment motion, the parties began to actively explore settlement.
In February 2020, these discussions bore fruit. TAMU and PETA finalized a settlement agreement in which TAMU committed to dismantling its censorship scheme. First, TAMU will remove all settings that block or filter PETA’s comments on TAMU’s Facebook page. Second, TAMU will not exercise viewpoint discrimination when administering its Facebook page, including against PETA and its supporters. Third, TAMU will not automatically or manually block or filter PETA’s comments on TAMU’s Facebook page. TAMU also agreed to pay PETA’s attorney fees incurred in prosecuting this case.
TAMU reserved its right to remove PETA comments that violate its new Facebook Use Policy, which TAMU adopted after PETA filed this suit. PETA in turn reserved its right to bring a new lawsuit against TAMU’s new policy. We expect that TAMU has learned its lesson, and will comply with the First Amendment in its administration of its Facebook page.
But of course, eternal vigilance is the price of digital liberty. EFF will continue to work as the leading defender of free speech on the Internet. This includes the right of all technology users to express themselves in the interactive spaces of government-operated social media, like TAMU’s Facebook page.

Related Cases: PETA v. Texas A&M
There’s a new and serious threat to both free speech and security online. Under a draft bill that Bloomberg recently leaked, the Attorney General could unilaterally dictate how online platforms and services must operate. If those companies don’t follow the Attorney General’s rules, they could be on the hook for millions of dollars in civil damages and even state criminal penalties.
The bill, known as the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, grants sweeping powers to the Executive Branch. It opens the door for the government to require new measures to screen users’ speech and even backdoors to read your private communications—a stated goal of one of the bill’s authors.
Senators Lindsey Graham (R-SC) and Richard Blumenthal (D-CT) have been quietly circulating a draft version of EARN IT. Congress must forcefully reject this dangerous bill before it is introduced.

EARN IT Is an Attack on Speech
EARN IT undermines Section 230, the most important law protecting free speech online. Section 230 enforces the common-sense principle that if you say something illegal online, you should be the one held responsible, not the website or platform where you said it (with some important exceptions).
Section 230 has played a crucial role in creating the modern Internet. Without it, social media as we know it today wouldn’t exist, and neither would the Internet Archive, Wikimedia, and many other essential educational and community resources. And it doesn’t just protect tech platforms either: if you’ve ever forwarded an email, thank Section 230 that you could do that without inviting legal risk on yourself.
EARN IT would establish a “National Commission on Online Child Exploitation Prevention.” This Commission would include the Chairman of the Federal Trade Commission, the Attorney General, the Secretary of Homeland Security, and 12 other members handpicked by leaders in Congress. The Commission would be tasked with recommending “best practices for providers of interactive computer services regarding the prevention of online child exploitation conduct.” But far from mere recommendations, those “best practices” would bring the force of law. Platforms that failed to adhere to them would be stripped of their Section 230 protections if they were accused (either in civil or criminal court) of carrying unlawful material relating to child exploitation.
Laws relating to restrictions on speech must reflect a careful balance of competing policy goals and protections for civil liberties. Lawmakers can only strike that balance through an open, transparent lawmaking process. It would be deeply irresponsible for Congress to offload that duty to an unelected and unaccountable commission.
It gets worse. If the Attorney General disagrees with the Commission’s recommendations, he can override them and write his own instead. This bill simply gives too much power to the Department of Justice, which, as a law enforcement agency, is a particularly bad choice to dictate Internet policy.
EARN IT is a direct threat to constitutional protections for free speech and expression. To pass constitutional muster, a law that regulates the content of speech must be as narrowly tailored as possible so as not to chill legitimate, lawful speech. Rather than being narrowly tailored, EARN IT is absurdly broad: under EARN IT, the Commission would effectively have the power to change and broaden the law however it saw fit, as long as it could claim that its recommendations somehow aided in the prevention of child exploitation. Those laws could change and expand unpredictably, especially after changes in the presidential administration.
If you’d like a preview of what types of measures the Attorney General might demand platforms adhere to, it’s worth examining what the current AG has already said platforms should be forced to do.

EARN IT Is an Attack on Security
Throughout his term as Attorney General, William Barr has frequently and vocally demanded “lawful access” to encrypted communications, ignoring the bedrock technical consensus that it is impossible to build a backdoor that is only available to law enforcement. Barr is far from the first administration official to make impossible demands of encryption providers: he joins a long history of government officials from both parties demanding that encryption providers compromise their users’ security.
We know how Barr is going to use his power on the “best practices” panel: to break encryption. He’s said, over and over, that he thinks the “best practice” is to always give law enforcement extraordinary access. So it’s easy to predict that Barr would use EARN IT to demand that providers of end-to-end encrypted communication give law enforcement officers a way to access users’ encrypted messages. This could take the form of straight-up mandated backdoors, or subtler but no less dangerous “solutions” such as client-side scanning. These demands would put encryption providers like WhatsApp and Signal in an awful conundrum: either face the possibility of losing everything in a single lawsuit or knowingly undermine their own users’ security, making all of us more vulnerable to criminals.
If you need more evidence that EARN IT is a thinly veiled attack on your right to secure and private communications, take note of one more subtle change it would bring to the law. Under EARN IT, a plaintiff would no longer be required to prove that a defendant actually knew about sexual exploitation in order to win a lawsuit against them. The plaintiff would only be required to prove the defendant acted “recklessly.” Lowering this standard opens the door wide to lawsuits based simply on providing secure, end-to-end-encrypted communications channels—a practice the Attorney General has characterized as “irresponsible”—which could be used to argue that encryption providers are acting “recklessly.”

EARN IT Is an Attack on Innovation
Weakening Section 230 makes it much more difficult for a startup to compete with the likes of Facebook or Google. Giving platforms a legal requirement to screen or filter users’ posts makes it extremely difficult for a platform without the resources of the big five tech companies to grow its user base (and of course, if a startup can’t grow its user base, it can’t get the investment necessary to compete).
While we don’t know exactly what speech moderation requirements a Commission would come up with, it’s worth considering how EARN IT stacks the deck, with a paltry two members representing “small” (under 30 million users!) platforms and no one with expertise in free speech and civil liberties. Such a stacked Commission is likely to make recommendations that favor large tech companies.
But even if the Commission were staffed more evenly, its rules would likely still confound new innovation in the Internet space. That’s because when it comes to online platforms and services, it’s very difficult to craft regulation that can adapt to new business models. Part of the beauty of Section 230 is that it’s enabled business models and types of services that didn’t exist yet when it passed in 1996. When Congress passed Section 230, no one had imagined Wikipedia. If Congress had implemented a detailed list of requirements for how platforms must be run in order to enjoy Section 230 protections, then Wikipedia would likely be illegal.
Sure, the Commission (or more accurately, the Attorney General) would be able to update the best practices, but how could they update them to allow a product that doesn’t exist yet? Imagine the scrappy, would-be inventor of a new alternative to Facebook having to first convince the United States government to change its rules before she can even launch a beta app.
There’s a common misconception that Section 230 is a handout to big Internet companies. In truth, undermining Section 230 does far more to hurt new startups than to hurt Facebook and Google. 2018’s poorly-named Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA)—the only major change to Section 230 since it passed in 1996—was endorsed by nearly every major Internet company. One consequence of FOSTA was the closure of a number of online dating services, a niche that Facebook set about filling just weeks after the law passed.

EARN IT Is Unnecessary
EARN IT would simply have no effect—positive or negative—on the DOJ’s ability to prosecute online platforms that criminally aid and abet child abuse.
Remember, Section 230 does not exempt online intermediaries from violations of federal criminal law. If the Department of Justice believes that an online platform is breaking the law by knowingly distributing child exploitation imagery, then it can and must enforce the law. What’s more, if an Internet company discovers that people are using its platforms to distribute child sexual abuse material, current federal law requires it to provide that information to the National Center for Missing and Exploited Children and to cooperate with law enforcement investigations.
EARN IT is anti-speech, anti-security, and anti-innovation. Congress must reject it.
Clearview AI—Yet Another Example of Why We Need A Ban on Law Enforcement Use of Face Recognition Now
This week, additional stories came out about Clearview AI, the company we wrote about earlier that’s marketing a powerful facial recognition tool to law enforcement. These stories discuss some of the police departments around the country that have been secretly using Clearview’s technology, and they show, yet again, why we need strict federal, state, and local laws that ban—or at least press pause—on law enforcement use of face recognition.
Clearview’s service allows law enforcement officers to upload a photo of an unidentified person to its database and see publicly-posted photos of that person along with links to where those photos were posted on the internet. This could allow the police to learn that person’s identity along with significant and highly personal information. Clearview claims to have amassed a dataset of over three billion face images by scraping millions of websites, including news sites and sites like Facebook, YouTube, and Venmo. Clearview’s technology doesn’t appear to be limited to static photos but can also scan for faces in videos on social media sites.
Clearview has been actively marketing its face recognition technology to law enforcement, and it claims more than 1,000 agencies around the country have used its services. But up until last week, most of the general public had never even heard of the company. Even the New Jersey Attorney General was surprised to learn—after reading the New York Times article that broke the story—that officers in his own state were using the technology, and that Clearview was using his image to sell its services to other agencies.
All of this shows, yet again, why we need to press pause on law enforcement use of face recognition. Without a moratorium or a ban, law enforcement agencies will continue to exploit technologies like Clearview’s and hide their use from the public.
Law Enforcement Abuse of Face Recognition Technology Impacts Communities
Police abuse of facial recognition technology is not theoretical: it’s happening today. Law enforcement has already used “live” face recognition on public streets and at political protests. Police in the UK continue to use real-time face recognition to identify people they’ve added to questionable “watchlists,” despite high error rates, serious flaws, and significant public outcry. During the protests surrounding the death of Freddie Gray in 2015, Baltimore Police ran social media photos against a face recognition database to identify protesters and arrest them. Agencies in Florida have used face recognition thousands of times to try to identify unknown suspects without ever informing those suspects or their defense attorneys about the practice. NYPD officers appear to have been using Clearview on their personal devices without department approval and after the agency’s official face recognition unit rejected the technology. And even Clearview itself seems to have used its technology to monitor a journalist working on a story about its product.
Law enforcement agencies often argue they must have access to new technology—no matter how privacy invasive—to help them solve the most heinous of crimes. Clearview itself has said it “exists to help law enforcement agencies solve the toughest cases.” But recent reporting shows just how quickly that argument slides down its slippery slope. Clifton, New Jersey officers used Clearview to identify “shoplifters, an Apple Store thief and a good Samaritan who had punched out a man threatening people with a knife.” And a lieutenant in Green Bay, Wisconsin told a colleague to “feel free to run wild with your searches,” including using the technology on family and friends.
Widespread Use of Face Recognition Will Chill Speech and Fundamentally Change Our Democracy
Face recognition and similar technologies make it possible to identify and track people in real time, including at lawful political protests and other sensitive gatherings. Widespread use of face recognition by the government—especially to identify people secretly when they walk around in public—will fundamentally change the society in which we live. It will, for example, chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate with others. Countless studies have shown that when people think the government is watching them, they alter their behavior to try to avoid scrutiny. And this burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups.
The right to speak anonymously and to associate with others without the government watching is fundamental to a democracy. And it’s not just EFF saying that—the founding fathers used pseudonyms to debate what kind of government we should form in this country in the Federalist Papers, and the Supreme Court has consistently recognized that anonymous speech and association are necessary for the First Amendment right to free speech to be at all meaningful.
What Can You Do?
Clearview isn’t the first company to sell a questionable facial recognition product to law enforcement, and it probably won’t be the last. Last year, Amazon promotional videos encouraged police agencies to acquire the company’s “Rekognition” face recognition technology and use it with body cameras and smart cameras to track people throughout cities; this was the same technology the ACLU later showed was highly inaccurate. At least two U.S. cities have already used Rekognition.
But communities are starting to push back. Several communities around the country as well as the state of California have already passed bans and moratoria on at least some of the most egregious government uses of face recognition. Even Congress has shown, through a series of hearings on face recognition, that there’s bipartisan objection to carte blanche use of face recognition by the police.
EFF has supported and continues to support these new laws as well as ongoing legislative efforts to curb the use of face recognition in Washington, Massachusetts, and New York. Without an official moratorium or ban, high-level attorneys have argued police use of the technology is perfectly legal. That’s why now is the time to reach out to your local city council, board of supervisors, and state or federal legislators and tell them we need meaningful restrictions on law enforcement use of face recognition. We need to stop the government from using this technology before it’s too late.
Once appearing to be a done deal, the sale of the .ORG registry to private equity is facing new delays and new opposition, after a successful protest in front of ICANN last week by nonprofits and an intervention by the California Attorney General. Private equity firm Ethos Capital’s proposed $1.1 billion purchase of the Public Interest Registry (PIR) has raised nearly unanimous opposition from the nonprofit world, along with expressions of concern from technical experts, members of Congress, two UN Special Rapporteurs, and U.S. state charities regulators. ICANN, the nonprofit body that oversees the Internet’s domain name system, has found itself under increasing pressure to reject the deal.
“ICANN, You Can Stop The Sale!”
Last Friday’s protest at ICANN’s Los Angeles headquarters was the culmination of two months of intense backlash to the sale by nonprofits from around the globe, from The Girl Scouts of America, Consumer Reports, and the YMCA to Wikimedia and Oxfam. Nonprofit professionals and technologists gathered to tell ICANN their concerns in person: a private equity–owned firm running the .ORG registry would have strong incentives to undermine the privacy and free speech rights of nonprofit organizations, and to exploit them financially, in pursuit of new revenue streams for its investors. Besides potentially raising annual registration fees, PIR could censor nonprofit organizations at the request of powerful corporations or governments, or it could collect and monetize web browsing data about the people who visit .ORG websites.
The day before the protest, ICANN and PIR agreed to extend the contractual deadline for ICANN’s review of the sale by nearly a month, until February 17th. Although ICANN initially demanded transparency from PIR; its owner, the Internet Society (ISOC); and Ethos Capital around the details of the sale and the legal framework of PIR’s new for-profit status, very little of this information has been released to the public. ICANN even seems to be ignoring a formal request [.pdf] for information by the Address Supporting Organization, part of the “Empowered Community” that was created to oversee ICANN after its independence from U.S. government control. Despite its initial lack of transparency, ICANN now seems to be feeling pressure from the public not to rubber-stamp the acquisition.
The protest was organized by EFF, NTEN, Fight for the Future, and Demand Progress. Shortly before it started, ICANN staff seemed ready to talk to the protesters, reaching out to the organizers and offering to meet with them in person after the event. The organizers agreed, and suggested ICANN staff and the board join during the protest as well—standing with protesters, if they’d like, or observing, to learn more about the coalition and their concerns. But on the day of the protest, ICANN staff canceled the in-person meeting.
As ICANN’s board of directors met inside, EFF’s Elliot Harmon explained to the crowd outside what was at stake: the .ORG ecosystem is "not a product to be sold. It's not this asset that you can let acquire a bunch of value over 16 years and then sell it to a private equity firm. It's something special. It's part of the infrastructure that the global NGO sector relies on.” Supporters joined in chants of “1,2,3,4, profit’s not what .ORG’s for!” and “ICANN, you can stop the sale!” As Amy Sample Ward, CEO of NTEN, said, “This is [ICANN’s] job. This is their responsibility… if we were to make a decision about who could own and manage the .ORG domain that truly had nonprofits and the public's interest at heart it would not be a private equity firm. So we understand the role that ICANN has apparently more than they seem to, and we are calling on them to step in, stop the sale, and to immediately open up a multi-stakeholder process.”
At the end of the rally, surprising the protesters, the entire ICANN board came out to meet them in person. Organizers handed copies of two petitions, signed by 34,000 individuals and over 700 nonprofit organizations, to Board President Maarten Botterman, in a powerful moment that signaled ICANN’s willingness to consider the protesters’ concerns.
Also last week, well-known international NGOs including Amnesty International, Access Now, and the Sierra Club held a press conference at the World Economic Forum in Davos, Switzerland, to tell world leaders that selling .ORG puts civil society at risk. Numerous recent stories in the press have covered nonprofits’ concerns as well, from the lack of transparency in the process and ICANN’s failure to consider alternatives, to the danger the sale could represent to ICANN’s own governance.
California Attorney General Asks for Unredacted Financial Info On Sale, Questions ICANN’s Authority
The California Attorney General’s Office has also reached out to ICANN, according to correspondence published on the ICANN website [.pdf], and asked for in-depth information on the sale. Some of its questions overlap with the questions ICANN has asked of PIR. According to ICANN, the Attorney General’s request constitutes an order that overrides confidentiality agreements which previously let ICANN hold back information, and requires them to respond with the confidential documents. On account of that request, ICANN has asked PIR for two more months to review the sale, meaning that the sale cannot be completed before April. In the meantime, the Attorney General’s office will be “analyz[ing] the impact to the nonprofit community, including to ICANN.”
Among the documents requested are not only the financial agreements, meeting minutes, documentation, and correspondence related to the transfer itself, but also:
- Detailed information about the removal of domain price caps, which occurred just months before the sale was announced, and which ICANN, ISOC, and PIR have continuously (and curiously) claimed was unrelated to the sale.
- Detailed information about ICANN staff and ICANN’s conflict-of-interest policy, indicating the Attorney General’s concern that at least some of those involved in the sale are self-dealing.
- Historical information about ICANN’s own authority to manage the top-level domains, which could mean the Attorney General’s office is concerned enough about this transfer to put its trust in ICANN’s governance ability at risk.
We’re glad to see the Attorney General investigating the sale on behalf of nonprofit organizations. In addition to answering the Attorney General, ICANN should also respond to the many questions posed by the nonprofit community itself, many of which overlap. Three big questions the nonprofit community continues to ask of ICANN and PIR: How does Ethos plan on paying back the debt it will accrue in the purchase of PIR, without negatively impacting .ORGs? What “new products and services” does Ethos intend to offer to the .ORG ecosystem that makes this sale necessary? And will those new products and services serve the needs of nonprofits, or exploit them?
People who work on Internet governance issues get nervous when governments throw their weight around, and for good reason: ICANN volunteers have worked hard to keep the domain name system and other parts of the Internet’s governance structure out of government hands. Since 2016, ICANN is no longer formally supervised by the U.S. Department of Commerce, and no national government can dictate policy there, as much as some may want to. Instead of answering to governments, ICANN is supposed to answer to the community of Internet users. ICANN’s independence is an important check against censorship and government surveillance through the DNS. But that independence is fragile. It depends on ICANN maintaining legitimacy through good processes for public input and by being responsive to the concerns of Internet users who are most in need of protection, such as nonprofit users. If ICANN can only give rubber-stamp approval to billion-dollar deals that don’t protect Internet users from surveillance and censorship, then why does ICANN exist?
To avoid government intervention here, and the dangerous precedent it would set, ICANN needs to insist on more transparency around the sale of PIR, and to actively solicit public input through a multi-stakeholder process. Over the last few months, it’s been increasingly obvious that the public needs to be involved. That’s why EFF thanks each of the 34,000 individuals and over 700 organizations who signed a petition to ICANN, all who expressed their fears or requested more information about this sale, and those who helped rally in support of their favorite nonprofits at ICANN. The nonprofit and .ORG community have been united in their concern about the risk this deal presents to civil society since it was announced, and we’re glad to see the Attorney General join us in questioning the value that this sale supposedly brings to the nonprofit ecosystem.
The sale of the .ORG registry will impact the nonprofits we all care about. Please take a moment to add your name to the petition demanding a stop to the sale. If you represent an organization that would be affected by the sale, then you can find instructions there for adding your organization’s name to our coalition letter.
Thank you to our friends at NTEN, Fight for the Future, and Demand Progress—and especially to NTEN CEO Amy Sample Ward—for your work in organizing the protest.
STAND UP FOR .ORG
Computer security researchers and journalists play a critical role in uncovering flaws in software and information systems. Their research and reporting allows users to protect themselves, and vendors to repair their products before attackers can exploit security flaws. But all too often, corporations and governments try to silence reporters, and punish the people who expose these flaws to the public.
This dynamic is playing out right now in a court in India, where a company is seeking to block Indian readers from accessing journalism by the American security journalist known as Dissent Doe. If it succeeds, more than a billion people in India would be blocked from reading Dissent Doe’s reporting.
Here’s what happened: last summer, Dissent Doe discovered that an employee wellness company was leaking patients’ private counseling information on the publicly available Web. Dissent alerted the company, called 1to1Help, so that it could secure its patients’ records. After Dissent repeatedly contacted the company, it finally secured the confidential data, a month after Dissent first notified it of the breach.
At that point—once the leak was fixed, and the data was no longer available to malicious actors—Dissent wrote about the breach on the website DataBreaches.net, where Dissent reports on significant security flaws.
At first, 1to1Help seems to have recognized the strong public interest in having these types of vulnerabilities exposed. After fixing the breach, the company emailed Dissent to express its thanks for alerting the company, and allowing it to strengthen its data security.
A few weeks later, however, the company took a different tack. It filed a meritless criminal complaint against Dissent in the Bangalore City Civil Court alleging that Dissent “hacked” its patient files—even though the complaint itself acknowledges that the patient files were available to anyone on the public Web, until Dissent alerted the company about this flaw. The criminal complaint also alleges that Dissent’s emails requesting comment for the DataBreaches.net story were “blackmail.”
Thankfully, any judgment against Dissent Doe in India would be unenforceable in the United States thanks to the protections of an important law called the Securing the Protection of Our Enduring and Established Constitutional Heritage (SPEECH) Act. Under the SPEECH Act, foreign orders aren’t enforceable in the United States unless they are consistent with the free speech protections that the U.S. and state constitutions guarantee, as well as with state laws.
But the injunction that 1to1Help is asking for would prevent Dissent’s website, DataBreaches.net, from being accessed by anyone in India. And if 1to1Help’s meritless lawsuit succeeds, other companies would surely follow suit in order to block Indians’ access to journalism online.
We hope the court in India decides to adhere to global principles of freedom of speech, and of the press. It should throw this dangerous lawsuit out of court.
The National Football League seems to be gunning for a spot in our Hall of Shame by setting a record for all-time career TDs—no, not touchdowns, but takedowns. We’ve written before about the NFL’s crusade against anyone who dares use the words “Super Bowl” to talk about, well, the Super Bowl.
But the NFL’s trademark bullying doesn’t end there. One of the NFL’s latest victims is Zach Berger, a New Yorker who sells merchandise for frustrated New York Jets fans through a website called Same Old Jets Store. Most of Berger’s products feature a parody version of the Jets’ logo, modified to say “SAME OLD JETS”—a phrase that’s been used for decades to criticize the team’s performance and express fans’ sense of inevitable disappointment. His other products include “MAKE THE JETS GREAT AGAIN” hats and clothing that says “SELL THE TEAM” in a font similar to one used on Jets merchandise.
But if you’re a cynical Jets fan in need of new gear, you’re out of luck for now. Earlier this month, the NFL contacted Shopify, the platform Berger uses for his store, and claimed that every item sold by Same Old Jets Store infringes its trademarks. The NFL didn’t even bother to identify which trademarks were supposedly infringed by which products—its formulaic notice just asserted trademark rights in the names and logos for the NFL and all 32 of its teams, then identified every product listing as “infringing content.” The league’s only explanation for its complaint was an obvious copy-and-paste job that parrots a legal test for trademark infringement, claiming consumers are likely to believe the gear was put out or approved by the NFL.
Seriously? The idea that consumers would think the NFL is selling official merchandise that mocks the Jets (using a phrase the team’s owner has called “disrespectful”), or says “SELL THE TEAM,” is ridiculous. On top of that, Berger’s parodies of the Jets’ logo and merchandise are protected by the First Amendment and trademark’s nominative fair use doctrine, which protects your right to use someone else’s trademark to refer to the trademark owner or its products. (Nominative fair use, by the way, is also why you can call the Super Bowl “the Super Bowl.”)
Disappointingly, Shopify responded to the infringement complaint by taking down Berger’s listings, without questioning the NFL’s absurd claims or giving Berger a chance to respond. Even worse, when Berger contacted Shopify and explained why the NFL’s complaint was baseless, Shopify simply stated that it had forwarded Berger’s message to the NFL and would not restore the listings “until this matter is resolved between the parties.” More than a week later, the NFL has yet to respond—which isn’t surprising, since Shopify already did exactly what the NFL wanted.
It’s disappointing that the NFL continues to use bogus trademark claims to get its way. It’s also disappointing that Shopify seems uninterested in standing up for the rights of its users. We hope that both will come to their senses, and EFF is standing by to intervene if they don’t.
Christian Frank is a freelance IT consultant who was born and raised, and currently resides, in Cologne, Germany. Last year, he took part in the demonstrations against Article 13 in Europe, a topic that he remains passionate about, as you’ll see in this interview.
Our conversation gives some perspective as to the differences between German and U.S. views on freedom of expression, particularly when it comes to hate speech. But there are also many similarities: Christian’s experiences growing up in Germany during the split between East and West, with parents who experienced World War II, have shaped his views about who should—and shouldn’t—regulate what we can and cannot say.
We also discussed the promise of social media, and the internet as—to use Christian’s words—“another living space” that we need to keep fighting to protect.
Jillian C. York: Thanks for joining me today, Christian. Let me start with a basic question: What does free speech mean to you?
Free speech to me means that I can speak my mind without fearing for reprisals. It doesn’t mean that I can berate or hate on people, but it means that I can speak my mind without fear.
York: We’re at a difficult moment right now when it comes to a variety of different online speech. What do you think we should be doing when it comes to speech online?
For me, there is a distinction between free speech and hate speech. As long as I speak my mind, I don’t think we should do anything. If I start hating on other people, I would expect someone to stop me. I think there’s a difference. There’s this old saying from Rosa Luxemburg: “Your freedom ends where the other’s begins.” That’s kind of my take on it.
I would object to any kind of censorship, but free speech is not an excuse for hate speech. At least that’s how I see it.
York: I’m curious then...you said you’re against censorship. Do you think that some limitations on even hateful speech could be censorship?
That’s probably the most difficult question, and I don’t have an answer. From where I grew up, in Germany, we’ve always had a limit on free speech, and that limit is that you’re not allowed to speak in favor of the Nazis. I think that’s quite okay; I would not consider that censorship, but that’s also because of my cultural background and the fact that it’s always been that way.
I don’t think that it’s curbing your free speech if you’re required to use some civility and not spread hate that way.
York: From where I sit, one of the concerns that I have is the question of who gets to decide what is hate speech, or who is a terrorist, and so on. What do you think about the current state of play on the internet in terms of how much control corporations have over the governance of speech?
Well, at the moment, corporations have all control, right? There’s no governance body outside of the corporations. And I personally think the corporations, because they’re not bound by ideology, are actually doing a reasonable job. I would much rather have the corporations continue than to introduce a state-run body. That’s open for misuse.
The generation before mine perfected this, and I don’t want that again. I don’t want states to control free speech. Corporations aren’t necessarily evil, they just want to make money.
York: Huh, I feel like that position really must depend on which state one comes from.
Well, both my parents were in the last war, so I have direct connections, so to speak. The other half of Germany was a dictatorship until 1989, and we all know what the state can do to free speech. Censorship is a really powerful instrument, I think, and I would not give this amount of control to a government. Governments can change, democracies can go away. Maybe we could introduce some kind of neutral body, but I don’t know.
Don’t get me wrong, I wouldn’t trust Mark Zuckerberg as a person, but even Facebook is doing a good job to protect their ad revenues, which is a much better driver than ideology.
York: Is there a personal story you’d be willing to share that helped shape your views on speech?
Well, for me, that story is visits to the GDR. In the GDR, they had a very strong secret service, the Stasi, so whatever you were saying, even in private settings, you had to be very careful about what you said. And we, as Westerners, were under double scrutiny. That was not free at all, it was very oppressing.
I never had any personal consequences, but every time I was in East Berlin, I was very very careful. As a West German citizen, I had to cross over back from the east at 10pm, so even if you were with friends at 9:30, you had to rush, since it was a nightmare if you didn’t make it. I remember at midnight, the American broadcasting station in Berlin would be tolling the Liberty Bell, and I would sigh and think “I’m home.”
But—I grew up white and male, I’m absolutely privileged. In my daily life in West Germany, I never had an issue.
York: Still, it’s an important perspective. Many people who grow up with some sort of privilege never get to see these things firsthand.
So let me ask this: What advice would you give to the next generation, the younger generation? I worry sometimes that younger folks don’t care that much about free speech anymore.
May I digress? I’d actually like to object to what you’re saying, because from my view, Generation Z is so political...at least the people that I see. Around Article 13, they were brilliant. It was so much fun to go to the streets with them. When I was young, after the ‘68 student revolts, we were very political. We’d march against nuclear energy, missiles. Then, after us, nothing happened, there were a lot of apolitical generations after us. But this current generation is so political! I was happy to be able to support them with Article 13, and now I’m happy to be able to support them on the climate strike, doing grown-up stuff for them.
So, to Generation Z, the only thing I can say is: Don’t give up. Go on. You’re already on the right path.
I think this generation values speech very highly. If you hear them talk about climate justice, it has a very social and very liberalist aspect. So, like I said, with the current generation, I’m not worried at all.
York: I love that! Okay, then I guess I want to also ask you a little more about Article 13. It’s an unfortunate and interesting time when it comes to copyright. A lot of people don’t see the right to share, the right to remix, as a speech issue. So, how do you frame it as a free speech issue?
What I see the big music labels and publishers do is try to protect their revenue stream. I don’t think it’s really an issue of free speech or not [for them], they’re just trying to protect their revenue stream. What they ignore completely, is that all the provisions in Article 13 would close social media.
For me, it’s not as much about free speech—even if it comes into full effect, I can say what I want, I just can’t share what I want. From my point of view, it’ll kill social media, it’ll kill off the environment that my kids grew up in, which I’m pretty mad about—they are too—but I can still put myself out in front of a white wall without music and say “our government is shite.” That Article 13 won’t prohibit. It’ll completely cut down my reach, but it won’t disallow me to speak my mind. It’ll kill off social media, all the discussions, the communication, the conversation. It’s absolutely awful.
Now, here, it’s taken a backseat to climate issues, but we still have to fight it. It’s not over. There’s still hope.
York: Yes, I agree, there’s still hope. Well, I think we’ve covered most of my questions, but: Are there any other thoughts you want to add?
What really worries me is the right-wing groups putting out their hate under the cover of free speech. There’s a lot of servers—not just 4chan or 8chan, but even on diaspora*, where a “free speech” server will play host to hate. They’ve kind of occupied the term, and this is enabled by the US president, who would say that if you don’t publish these nutters, you’re against free speech.
York: So the term has been co-opted?
Yes. And that’s something that we, “liberal-minded intellectuals”, need to think about. Free speech is important, it’s a basic human right. If you can’t speak your mind, you can’t think. It’s a massive violation of your basic human rights. But still, I don’t want to give it to the right-wing people and have them spout their fascism, which is not free speech.
York: Well, it is in the US.
I know, I know. I can only speak from my point of view, and yes, I’ve spent most of my time in Europe. I think what we need to do is take back control of free speech as a fundamental liberal value. Free speech is ours.
York: I’m curious what you think about the idea that fascists will always find their way around censorship. You can ban the swastika, for example, but they’ll just use other symbols to identify themselves.
Well, I grew up in a country where the swastika is banned, and I think it’s a good thing. Yes, they can come up with other symbols, but you can ban those too. I don’t see this as a problem. But that’s my background—I grew up with the understanding that the limit to free speech is the Third Reich, and none of that is allowed.
Banning symbols is one thing, but hate speech or inciting violence is another thing. If they start using code words, the mass appeal is gone. They can’t reach as many people if they’re not speaking plain language. If someone says “kill the Jews,” I think it’s right for them to be banned. I don’t see that as censorship, personally.
York: Well, that's possibly incitement. But social media platforms have also co-opted “hate speech” as a term. On Facebook, for example, “kill all men” is hate speech, but Holocaust denial isn’t. That doesn’t make sense to a lot of people.
I mean yes, that is entirely wrong. That’s not how it should be. But that’s probably close to Mark Zuckerberg’s beliefs. He doesn’t strike me as very liberal. But we can always go to Twitter.
The point is, social media is run by corporations, and run for profit. So we cannot expect a company to provide a platform that adheres to the Universal Declaration of Human Rights (UDHR). It’s theirs, it’s not mine. I use it, but it’s theirs and they can do what they please. It’s a corporation, not a state, not people.
The difference is if I go to a community-run diaspora* server and I get banned, I can appeal, and it works. But if I go to Mark’s student project [Facebook] and he doesn’t like me saying things, well...it’s his server. I’m not even a customer, which would give me certain rights, I’m just using it. If we’re in a commercial space, can we really argue with fundamental rights? I don’t know!
York: [laughing] That’s what I’ve been doing for the past decade, so we certainly can!
Right, right. But if I go to national German radio, for example, I can force them to adhere to the UDHR. Can I force a corporation? I don’t know.
York: Right, I understand. It’s difficult to find a way to hold companies accountable to a human rights framework.
Obviously, I would fully support it, but now, the question is: Who can hold them accountable? They’re transnationals. The only body that could reasonably hold them accountable would be the United Nations. And I’m fully for it, the more we can force them to adhere to basic civility and liberty, I’m for it. I’m just thinking aloud: Where’s the body? Who can actually go to a corporation and say “you’re misbehaving”? And it means different things in Germany, or Saudi Arabia, or Hong Kong. So the only universal body would be the United Nations, and I would be all for it if they came up with a framework for social media.
York: I think that we’re getting to that point.
And yes, I think that would be the right body. It’s not up to the German government to make rules for Facebook. Even the EU is too small, in my opinion. The one thing that I don’t want to go back to is regional networks. Having a truly global network like social media—that’s something that I would like to protect. I don’t want it to be like, “In India you can say this, in Germany you can say this,” and if you want to talk across borders, you have to use your passport.
I lived through the transition of having to use a passport to go to the Netherlands to having free travel. And we also have free travel on the internet. If I want to talk to someone in Yemen, I can, provided we speak the same language of course. So that is, for me, also a huge freedom that social media has brought us. The internet has always had it, but social media for me made the internet accessible, and now I can chat with, say, Japanese guys.
York: Yeah, it’s easy sometimes for me to forget that the internet is still providing such an incredible way for people to meet across boundaries.
Yes, there’s a huge freedom in social media that we need to protect, that I would like to see protected. I have “internet friends” all over the world, and the only rules I need to obey are, for example, YouTube’s community guidelines. Okay.
So the question is: Is YouTube the right person to set those guidelines? Eh, maybe not, but who else? Most definitely not my government, I don’t want them to interfere at all.
The internet, to me, is like another living space, a second world. It needs protection. I’m always so fascinated to see the way my kids grew up—at 12, they had more connection to the world than I had at 30.
York: Indeed! Thank you so much Christian, I learned a lot.
Last week, Sens. Ron Wyden (D–Oregon) and Steve Daines (R–Montana), along with Reps. Zoe Lofgren (D–California), Warren Davidson (R–Ohio), and Pramila Jayapal (D–Washington), introduced the Safeguarding Americans’ Private Records Act (SAPRA), H.R. 5675. This bipartisan legislation includes significant reforms to the government’s foreign intelligence surveillance authorities, including Section 215 of the PATRIOT Act, which allows the government to obtain a secret court order requiring third parties, such as telephone providers, Internet providers, and financial institutions, to hand over business records or any other “tangible thing” deemed “relevant” to an international terrorism, counterespionage, or foreign intelligence investigation. If Congress does not act, Section 215 is set to expire on March 15.
The bill comes at a moment of renewed scrutiny of the government’s use of the Foreign Intelligence Surveillance Act (FISA). A report from the Department of Justice’s Office of the Inspector General released late last year found significant problems in the government’s handling of surveillance of Carter Page, one of President Trump’s former campaign advisors. This renewed bipartisan interest in FISA transparency and accountability—in combination with the March 15 sunset of Section 215—provides strong incentives for Congress to enact meaningful reform of an all-too secretive and invasive surveillance apparatus.
Congress passed the 2015 USA FREEDOM Act in direct response to revelations that the National Security Agency (NSA) had abused Section 215 to conduct a dragnet surveillance program that siphoned up the records of millions of Americans’ telephone calls. USA FREEDOM was intended to end bulk and indiscriminate collection using Section 215. It also included important transparency provisions aimed at preventing future surveillance abuses, which are often premised on dubious and one-sided legal arguments made by the intelligence community and adopted by the Foreign Intelligence Surveillance Court (FISC)—the federal court charged with overseeing much of the government’s foreign intelligence surveillance.
Unfortunately, government disclosures made since USA FREEDOM suggest that the law has not fully succeeded in limiting large-scale surveillance or achieved all of its transparency objectives. While SAPRA, the newest reform bill, does not include all of the improvements we’d like to see, it is a strong bill that would build on the progress made in USA FREEDOM. Here are some of the highlights:

Ending the Call Detail Records Program
After it was revealed that the NSA relied on Section 215 to collect information on the phone calls of millions of Americans, the USA Freedom Act limited the scope of the government’s authority to prospectively collect these records. But even the more limited Call Detail Records (CDR) program authorized in USA Freedom was later revealed to have collected records outside of its legislative authority. And last year, due to significant “technical irregularities” and other issues, the NSA announced it was shutting down the CDR program entirely. Nevertheless, the Trump administration asked Congress to renew the CDR authority indefinitely.
SAPRA, however, would make the much-needed reform of entirely removing the CDR authority and clarifying that Section 215 cannot be used to collect any type of records on an ongoing basis. Ending the authority of the CDR program is a necessary conclusion to a program that could not stay within the law and has already reportedly been discontinued. The bill also includes several amendments intended to prevent the government from using Section 215 for indiscriminate collection of other records.

More Transparency into Secret Court Opinions
USA FREEDOM included a landmark provision that required declassification of significant FISC opinions. The language of the law clearly required declassification of all significant opinions, including those issued before the passage of USA FREEDOM in 2015. However, the government read the law differently: it believed it was only required to declassify significant FISC opinions issued after USA FREEDOM was passed. This crabbed reading of USA FREEDOM left classified nearly forty years of significant decisions outlining the scope of the government’s authority under FISA—a result clearly at odds with USA FREEDOM’s purpose to end secret surveillance law. We are pleased to see that this bill clarifies that all significant FISC opinions, no matter when they were written, must be declassified and released. It also requires that future opinions be released within six months of the date of decision.

“Tangible Things” and the Impact of Carpenter v. United States
As written, Section 215 allows the government to collect “any tangible thing” if it shows there are “reasonable grounds” to believe those tangible things are “relevant” to a foreign intelligence investigation. This is a much lower standard than a warrant, and we’ve long been concerned that an ambiguous term like “tangible things” could be secretly interpreted to obtain sensitive personal information. We know, for example, that previous requests under Section 215 included cell site location information, which can be used for invasive tracking of individuals’ movements. But the landmark 2018 Supreme Court decision in Carpenter v. United States clarified that individuals maintain a Fourth Amendment expectation of privacy in location data held by third parties, thus requiring a warrant for the government to collect it. Following questioning by Senator Wyden, the intelligence community stated it no longer used Section 215 to collect location data but admitted it hadn’t analyzed how Carpenter applied to Section 215. SAPRA addresses these developments by clarifying that the government cannot warrantlessly collect GPS or cell site location information. It also forbids the government from using Section 215 to collect web browsing or search history, and anything that would “otherwise require a warrant” in criminal investigations.
These are important limitations, but more clarification is still needed. Decisions like Carpenter are relatively rare. Even if several lower courts held that collecting a specific category of information requires a warrant, we're concerned that the government might argue that this provision isn’t triggered until the Supreme Court says so. That’s why we’d like to see the law be even clearer about the types of information that are outside of Section 215’s authority. We also want to extend some of USA FREEDOM’s limitations on the scope of collection. Specifically, we’d like to see tighter limits on the requirement that the government have a “specific selection term” for the collection of “tangible things.”

Expanding the Role of the FISC Amicus
One of the key improvements in USA FREEDOM was a requirement that the FISC appoint an amicus to provide the court with a perspective independent of the government’s in cases raising novel or significant legal issues. Over time, however, we’ve learned that the amici appointed by the court have faced various obstacles in their ability to make the strongest case, including lack of access to materials relied on by the government. SAPRA includes helpful reforms to grant amici access to the full range of these materials and to allow them to recommend appeal to the FISA Court of Review and the Supreme Court.

Reporting
USA FREEDOM requires the intelligence community to publish annual transparency reports detailing the types of surveillance orders it seeks and the numbers of individuals and records affected by this surveillance, but there have been worrying gaps in these reports. A long-standing priority of the civil liberties community has been increased accounting of Americans whose records are collected and searched using warrantless forms of foreign intelligence surveillance, including Section 215 and Section 702. The FBI in particular has refused to count the number of searches of Section 702 databases it conducts using Americans’ personal information, leading to a recent excoriation by the FISC. SAPRA requires that the transparency reports include the number of Americans whose records are collected under Section 215, as well as the number of U.S. person searches the government does of data collected under Sections 215 and 702.

Notice and Disclosure of Surveillance to Criminal Defendants
Perhaps the most significant reform needed to the government’s foreign intelligence surveillance authority as a whole is the way in which it uses this surveillance to pursue criminal cases.
There are two related issues: government notice to defendants that they were surveilled, and government disclosure to the defense of the surveillance applications. Under so-called “traditional” FISA—targeted surveillance conducted pursuant to a warrant-like process—defendants are supposed to be notified when the government intends to use evidence derived from the surveillance against them. The same is true of warrantless surveillance conducted under Section 702, but we’ve learned that for years the government did not notify defendants as required. This lack of transparency denied defendants basic due process. Meanwhile, the government currently has no obligation to notify defendants whose information was collected under Section 215.
SAPRA partially addresses these problems. First, it requires notification to defendants in cases involving information obtained through Section 215. Second, and more generally, it clarifies that notice to defendants is required whenever the government uses evidence that it would not have otherwise learned had it not used FISA.
But this only addresses half of the problem. Even if a criminal defendant receives notice that FISA surveillance was used, that notice is largely meaningless unless the defendant can see—and then directly challenge—the surveillance that led to the charges. This has been one of EFF’s major priorities when it comes to fighting for FISA reform, and we think any bill that tackles FISA reform in addition to addressing Section 215 should make these changes as well.
FISA sets up a mechanism through which lawyers for defendants who are notified of surveillance can seek disclosure of the underlying surveillance materials relied on by the government. Disclosure of this sort is both required and routine in traditional criminal cases. It is crucial to test the strength of the government’s case and to effectively point out any violations of the Fourth Amendment or other constitutional rights. But in the FISA context, despite the existence of a disclosure mechanism, it has been completely toothless: in the history of the law, no defendant has ever successfully obtained disclosure of surveillance materials.
The investigation into surveillance of Carter Page demonstrates why this is a fundamental problem. The Inspector General found numerous defects in the government’s surveillance applications—defects that, had Carter Page been prosecuted, might have led to the suppression of that information in a criminal case against him. But, under the current system, Page and his lawyers never would have seen the applications, and the government might have been able to obtain a conviction based on potentially illegal and unconstitutional surveillance.
It’s important for Congress to take this opportunity to codify additional due process protections. It’s a miscarriage of justice if a person can be convicted on unlawfully acquired evidence, yet can’t challenge the legality of the surveillance in the first place. Attorneys for defendants in these cases need access to the surveillance materials—it’s a fundamental issue of due process. Unfortunately, SAPRA does not include any reforms to the disclosure provision of FISA. We look forward to working with Congress to ensure that the final FISA reform bill tackles this issue of disclosure.
In 2015, USA FREEDOM was a good first step in restoring privacy protections and creating necessary oversight and transparency into secret government surveillance programs. But in light of subsequent evidence, it’s clear that much more needs to be done. Though we would like to see a few improvements, SAPRA is a strong bill that includes many necessary reforms. We look forward to working with lawmakers to ensure that these and other provisions are enacted into law before March 15.