Electronic Frontier Foundation

Hearing Wednesday: Can Criminal Defendants Review DNA Analysis Software Used to Prosecute Them?

EFF - 3 hours 42 min ago
California Appeals Court to Hear Arguments on Defense Review of DNA Analysis System

Fresno – On Wednesday, May 22, at 9 am, the Electronic Frontier Foundation (EFF) will argue that criminal defendants have a right to review and evaluate the source code of forensic DNA analysis software programs used to create evidence against them. The case, California v. Johnson, is on appeal before California’s Fifth District Court of Appeal.

In Johnson, the defendant was allegedly linked to a series of crimes by a software program called TrueAllele, used to evaluate complex mixtures of DNA samples from multiple people. As part of his defense, Johnson wants his defense team to examine the source code to see exactly how TrueAllele estimates whether a person’s DNA is likely to have contributed to a mixture, including whether the code works in practice as it has been described. However, prosecutors and the manufacturers of TrueAllele claim that the source code is a trade secret and that the commercial interest in secrecy should prevent a defendant from reviewing the source code—even though the defense has offered to follow regular procedure and agree to a court order not to disclose the code beyond the defense team.
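
To see the kind of computation at issue, consider the following deliberately simplified sketch of a likelihood-ratio calculation, the general sort of math probabilistic genotyping tools perform. This is not TrueAllele’s algorithm: it uses invented allele frequencies, a single genetic locus, and assumes every contributor’s alleles appear in the mixture with no drop-out or drop-in. Real systems also model peak heights, degradation, and error rates, and that added complexity is exactly what defense experts want to verify in the source code.

    from itertools import combinations_with_replacement, product

    # Invented allele frequencies at one locus (hypothetical numbers).
    freqs = {"A": 0.1, "B": 0.3, "C": 0.4, "D": 0.2}

    observed = {"A", "B", "C"}   # alleles detected in the evidence mixture
    suspect = ("A", "B")         # the defendant's genotype at this locus

    # Every unordered genotype an unknown contributor could have.
    genotypes = list(combinations_with_replacement(sorted(freqs), 2))

    def genotype_prob(g):
        """Hardy-Weinberg probability of an unordered genotype."""
        a, b = g
        return freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]

    def explains(*contributors):
        """True if the contributors' alleles exactly account for the mixture."""
        alleles = set()
        for g in contributors:
            alleles.update(g)
        return alleles == observed

    # H1: the suspect plus one unknown person produced the mixture.
    p_h1 = sum(genotype_prob(g) for g in genotypes if explains(suspect, g))

    # H2: two unknown people produced the mixture.
    p_h2 = sum(genotype_prob(g1) * genotype_prob(g2)
               for g1, g2 in product(genotypes, repeat=2)
               if explains(g1, g2))

    print(f"Likelihood ratio at this locus: {p_h1 / p_h2:.1f}")

Even in this toy model, small changes to the modeling assumptions or the allele frequencies swing the reported likelihood ratio, which is why a conviction resting on an implementation no one outside the company can inspect is so troubling.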

EFF is participating in Johnson as amicus, and has pointed out that at least two other DNA matching programs have been found to have serious source code errors that could lead to false convictions. In court Wednesday, EFF Senior Staff Attorney Kit Walsh will argue that Johnson has a constitutionally protected right to inspect and challenge the evidence used to prosecute him—a right that extends to the source code of the forensic software.

WHAT:
California v. Johnson

WHO:
EFF Senior Staff Attorney Kit Walsh

WHEN:
Wednesday, May 22
9 am

WHERE:
Fifth District Court of Appeal
2424 Ventura Street
Fresno, California, 93721

For more on this case:
https://www.eff.org/cases/california-v-johnson

TOSsed Out: Highlighting the Effects of Content Rules Online

EFF - 4 hours 12 min ago

Today we are launching TOSsed Out, a new iteration of EFF’s longstanding work in tracking and documenting the ways that Terms of Service (TOS) and other speech moderating rules are unevenly and unthinkingly applied to people by online services. As a result of these practices, posts are deleted and accounts banned, harming those for whom the Internet is an irreplaceable forum to express ideas, connect with others, and find support.

TOSsed Out continues in the vein of Onlinecensorship.org, which EFF launched in 2014 to collect reports from users in an effort to encourage social media companies to operate with greater transparency and accountability as they regulate speech. TOSsed Out will highlight the myriad ways that all kinds of people are negatively affected by these rules and their uneven enforcement.

Last week the White House launched a tool for people to report incidents of “censorship” on social media, following the President’s repeated allegations of bias against conservatives in how these companies apply their rules. In reality, commercial content moderation practices negatively affect all kinds of people, especially people who already face marginalization. We’ve seen everything from Black women flagged for sharing their experiences of racism to sex educators whose content is deemed too risqué. TOSsed Out will show that trying to censor social media at scale ends up removing legal, protected speech that should be allowed on platforms.

TOSsed Out’s debut today is the result of brainstorming, research, design, and writing work that began in late 2018, after we saw an uptick in takedowns resulting from increased public and government pressure, as well as the rise of automated tools. A diverse set of entries is being published today, including a Twitter account parodying Beto O’Rourke deemed “confusing” or “deceptive,” a gallery focused on raising awareness of the diversity of women’s bodies, a Black Lives Matter-themed concert, and an archive aimed at documenting human rights violations.

These examples, and the ones added in the future, make clear the need for companies to embrace the Santa Clara Principles. We helped create the Principles to establish a human rights framework for online speech moderation, require transparency about content removal, and specify appeals processes to help users get their content back online. We call on companies to make that commitment now, rather than later.

People rely on Internet platforms to share experiences and build communities, and not everyone has good alternatives to speak out or stay in touch when a tech company censors or bans them. Rules need to be clear, processes need to be transparent, and appeals need to be accessible.

EFF Project Shows How People Are Unfairly “TOSsed Out” By Platforms’ Absurd Enforcement of Content Rules

EFF - 5 hours 14 min ago
Users Without Resources to Fight Back Are Most Affected by Unevenly-Enforced Rules

San Francisco—The Electronic Frontier Foundation (EFF) today launched TOSsed Out, a project to highlight the vast spectrum of people silenced by social media platforms that inconsistently and erroneously apply terms of service (TOS) rules.

TOSsed Out will track and publicize the ways in which TOS and other speech moderation rules are unevenly enforced, with little to no transparency, against a range of people for whom the Internet is an irreplaceable forum to express ideas, connect with others, and find support.

This includes people on the margins who question authority, criticize the powerful, educate, and call attention to discrimination. The project is a continuation of work EFF began five years ago when it launched Onlinecensorship.org to collect speech takedown reports from users.

“Last week the White House launched a tool to report takedowns, following the president’s repeated allegations that conservatives are being censored on social media,” said Jillian York, EFF Director for International Freedom of Expression. “But in reality, commercial content moderation practices negatively affect all kinds of people with all kinds of political views. Black women get flagged for posting hate speech when they share experiences of racism. Sex educators’ content is removed because it was deemed too risqué. TOSsed Out will show that trying to censor social media at scale ends up removing far too much legal, protected speech that should be allowed on platforms.”

EFF conceived TOSsed Out in late 2018 after seeing more takedowns resulting from increased public and government pressure to deal with objectionable content, as well as the rise in automated tools. While calls for censorship abound, TOSsed Out aims to demonstrate how difficult it is for platforms to get it right. Platform rules—either through automation or human moderators—unfairly ban many people who don’t deserve it and disproportionately impact those with insufficient resources to easily move to other mediums to speak out, express their ideas, and build a community.

EFF is launching TOSsed Out with several examples of TOS enforcement gone wrong, and invites visitors to the site to submit more. In one example, a reverend couldn’t initially promote a Black Lives Matter-themed concert on Facebook, eventually discovering that using the words “Black Lives Matter” required additional review. Other examples include queer sex education videos being removed and automated filters on Tumblr flagging a law professor’s black and white drawings of design patents as adult content. Political speech is also impacted; one case highlights the removal of a parody account lampooning presidential candidate Beto O’Rourke.

“The current debates and complaints too often center on people with huge followings getting kicked off of social media because of their political ideologies. This threatens to miss the bigger problem. TOS enforcement by corporate gatekeepers far more often hits people without the resources and networks to fight back to regain their voice online,” said EFF Policy Analyst Katharine Trendacosta. “Platforms over-filter in response to pressure to weed out objectionable content, and a broad range of people at the margins are paying the price. With TOSsed Out, we seek to put pressure on those platforms to take a closer look at who is actually being hurt by their speech moderation rules, instead of just responding to the headline of the day.”

Contact: Jillian C. York, Director for International Freedom of Expression, jillian@eff.org; Katharine Trendacosta, Policy Analyst, katharine@eff.org

Senator Wyden Leads on Securing Elections Before 2020

EFF - Fri, 05/17/2019 - 3:32pm

Sen. Ron Wyden’s new proposal to protect the integrity of U.S. elections, the Protecting American Votes and Elections (PAVE) Act of 2019, takes a much-needed step forward by requiring a return to paper ballots.

The bill forcefully addresses a grave threat to American democracy—outdated election technologies used in polling places all over the country that run the risk of recording inaccurate votes or even allowing outside actors to maliciously interfere with the votes that individuals cast.

The simple solution: paper ballots and audits of paper ballots. EFF, along with security experts, has long supported this approach, arguing that the gold standard for securing our election infrastructure is paper ballots backed by risk-limiting audits (audits that statistically determine how many ballots must be hand-counted to confirm an election result). As Sen. Kamala Harris, one of the fourteen co-sponsors of the bill, recently said, “Russia can’t hack a piece of paper.”
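
For a sense of how a risk-limiting audit works in practice, here is a minimal sketch of a ballot-polling audit in the style of the BRAVO method. The parameters are invented for illustration, and the model assumes a two-candidate contest with no invalid ballots and sampling with replacement; real audit tools handle many more complications.

    import random

    def bravo_audit(reported_share, risk_limit, true_share, rng, max_draws=10_000):
        """Draw ballots at random until the reported outcome is confirmed
        to within the risk limit, or the audit gives up.

        Returns the number of ballots sampled, or None to signal that a
        full hand count is needed."""
        t = 1.0  # sequential likelihood-ratio test statistic
        for n in range(1, max_draws + 1):
            if rng.random() < true_share:     # drew a ballot for the winner
                t *= reported_share / 0.5
            else:                             # drew a ballot for the loser
                t *= (1 - reported_share) / 0.5
            if t >= 1 / risk_limit:
                return n                      # strong evidence: stop the audit
        return None                           # escalate to a full hand count

    rng = random.Random(42)
    n = bravo_audit(reported_share=0.55, risk_limit=0.05, true_share=0.55, rng=rng)
    print("ballots sampled to confirm the outcome:", n)

The key property is the risk limit itself: if the reported outcome is actually wrong, the chance the audit stops short of a full hand count is at most that limit (here, 5 percent).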

The last two decades have shown that the touchscreen and other machines used at polling places to cast votes are susceptible to tampering, and that outdated software and poorly configured settings lead to problems like inaccurately recorded votes or large drop-offs in down-ballot races.

Sen. Wyden’s bill sets out necessary steps to make sure that state and local governments can respond to the election security concerns raised by experts:

  • It requires paper ballots and risk-limiting audits in federal elections.
  • It allocates $500 million to states to buy secure machines that can scan paper ballots.
  • It allocates an additional $250 million to states to buy ballot-marking devices for voters with disabilities or who face language barriers.
  • It bans voting machines from connecting to the Internet.
  • It gives the Department of Homeland Security the authority to set mandatory national minimum cybersecurity standards for voting machines, voter registration databases, electronic poll books, and election reporting websites.
  • It empowers ordinary voters to enforce these critical safeguards with a private right of action.

The PAVE Act is supported by a large coalition of senators and a companion bill has been introduced in the House of Representatives. The foreign interference in the 2016 election stands to be repeated in 2020 if Congress does not act now to address the numerous concerns with the integrity of our voting system repeatedly identified by the information security community.

EFF Files Freedom of Information Act (FOIA) Request for Submissions to the White House’s Platform Moderation Tool

EFF - Fri, 05/17/2019 - 3:23pm

When social media platforms enforce their content moderation rules unfairly, it affects everyone’s ability to speak out online. Unfair and inconsistent online censorship magnifies existing power imbalances, giving people who already have the least power in society fewer places where they are allowed a voice online.

President Donald Trump and some members of Congress have complained on Twitter that the people most silenced online are those who share the President’s political views. So this week the White House launched a website inviting people to report examples of being banned online because of political bias.

The website leads users through a series of forms seeking their names, citizenship status, email address, social media handles, and examples of the censorship they encountered. People are required to accept a user agreement that permits the U.S. government to use the information in unspecified ways and to edit the submissions, raising the natural concern that a political operation will selectively disclose the results.

For this reason, we have sent a FOIA request to the Executive Office of the President’s Office of Science and Technology Policy for all submissions received. And because we are concerned that the White House’s website asks for a lot of personal information without sufficient privacy protections, we have asked that first name, last name, email address, phone number, and citizenship status be redacted.

It’s troubling to see people being asked whether they are a U.S. citizen or a permanent resident in light of the administration’s hard-line immigration policies. It raises legitimate questions about how that and other required personal information may be used, and about who the government really wants to hear from.

There’s no question that social media platforms are failing at speech moderation—that’s why EFF has been monitoring their practices for years and working with other civil society organizations to establish guidelines to encourage fairness and transparency. Since 2014, EFF has been collecting stories of people censored by social media at Onlinecensorship.org. The reports we’ve received over the years indicate that all kinds of groups and individuals experience censorship on platforms, and that marginalized groups around the world, with fewer traditional outlets and resources to speak out, are disproportionately affected by flawed speech moderation rules.

While we agree that mapping online censorship is important, we should remember that surveys like these will, at best, give an incomplete picture of social media companies' practices. And let’s be clear: the answer to bad content moderation isn’t to empower the government to enforce moderation practices.

Rather, social media platform owners must commit to real transparency about what speech they are removing, under what rules, and at whose behest. They should adopt moderation frameworks, like the Santa Clara Principles, that are consistent with human rights, with clear take down rules, fair and transparent removal processes, and mechanisms for users to appeal take down decisions.

California Now Classifies Immigration Enforcement as “Misuse” of Statewide Law Enforcement Network

EFF - Fri, 05/17/2019 - 12:51pm

It has taken more than a year, but the California Attorney General’s Office has taken steps to protect immigrants from U.S. Immigration and Customs Enforcement (ICE) and other agencies that abuse the state’s public safety network, the California Law Enforcement Telecommunications System (CLETS).

Following calls for reform from EFF and immigrant rights advocacy groups, the Attorney General issued new protocols that say if an agency accesses anything other than criminal history records on CLETS for immigration enforcement purposes, it may be treated as "system misuse," and the agency or individual could be subject to sanctions.

In October 2017, in response to threats of a mass deportation campaign by the Trump administration, the California legislature passed S.B. 54, also known as the California Values Act. More colloquially referred to as the “sanctuary state” law, S.B. 54 restricts state and local law enforcement from using resources to assist with immigration enforcement. Among these measures, the legislature prohibited agencies from providing personal information (such as a home or work address) for the purposes of enforcing immigration laws. The legislation also requires the Attorney General to “publish guidance, audit criteria, and training recommendations” to ensure to the “fullest extent practicable” that state and local law enforcement databases are not used to enforce immigration laws.

For years, EFF has been investigating abuse of CLETS, a sprawling computer network that connects law enforcement agencies across the state to a number of state databases, including driver license and vehicle registration information maintained by the California Department of Motor Vehicles. Often these cases of misuse have involved police accessing the system to spy on former or potential romantic partners, but police have also used the system to snoop on critical officials or to improperly check whether students live within a particular school district. CLETS users are not just local cops: a wide range of federal agencies, including many field offices within the U.S. Department of Homeland Security, have access to CLETS.

Following the passage of S.B. 54, EFF told the California Attorney General’s CLETS Advisory Committee that accessing CLETS for immigration enforcement (aside from criminal history information, which is exempt due to federal law) should be formally classified as misuse of the system. Such misuse can result in sanctions ranging from a letter of censure to cutting off access to CLETS to criminal prosecution. EFF repeated this call in a series of recommendations sent to the Attorney General’s office, co-authored by the ACLU, the National Immigration Law Center, the Urban Peace Institute, Advancing Justice – Asian Law Caucus, and more than a dozen other stakeholders.

The Attorney General’s office listened. According to the latest set of “Policies, Practices and Procedures,” which state law requires agencies to follow if they access CLETS:

[F]ederal, state or local law enforcement agencies shall not use any non-criminal history information contained within these databases for immigration enforcement purposes. “Immigration enforcement” includes any and all efforts to investigate, enforce, or assist in the investigation or enforcement of any federal civil immigration law, and also includes any and all efforts to investigate, enforce, or assist in the investigation or enforcement of any federal criminal immigration law that penalizes a person’s presence in, entry, or reentry to, or employment in, the United States.

The new policies for system misuse also clearly define accessing non-criminal data for immigration enforcement as “prohibited/unauthorized” use, which can result in either the suspension or removal of the agency or individual’s access to CLETS. State and local law enforcement agencies are advised to update their own governance policies and alert users that CLETS, or other databases associated with CLETS, should not be used for immigration enforcement.

While this change is a major step forward, it does not settle all our concerns. The Attorney General leaves it to the agencies to investigate their own cases of misuse, as long as they report the results of those investigations back to the Attorney General. We are skeptical that ICE or affiliated agencies will investigate themselves for using CLETS to support their deportation efforts. We urge the Attorney General to enact a more rigorous oversight process for federal agencies.

CLETS is one significant part of a sprawling web of state and local databases accessed by federal agencies. To further protect immigrants from ICE’s abuses, we urge the California state legislature to pass legislation prohibiting the use of all state and local databases for immigration enforcement.

Supreme Court Extends Antitrust Protections to App Store Customers

EFF - Thu, 05/16/2019 - 6:10pm

On Monday, the U.S. Supreme Court ruled that consumers who buy apps through the Apple App Store are direct purchasers and may seek antitrust relief under well-settled law. The decision carries major implications not only for Apple, but for other companies that host app stores and for antitrust law in other contexts. At the same time, it represents not a new jurisprudential doctrine so much as the application of well-established law to the novel factual context presented by app stores.

Since 1977, courts have deemed suits by indirect purchasers to be outside the bounds of antitrust law. In Illinois Brick Co. v. Illinois, the Supreme Court heard claims from the state of Illinois, which alleged harms from a price-fixing conspiracy among brick manufacturers who sold to masonry contractors with whom, in turn, the state government contracted for new construction projects. The Court held that only the masonry contractors enjoyed a right to challenge the alleged price-fixing conspiracy, whereas an indirect purchaser of downstream products (in that case, the state government plaintiff) did not have standing to bring a lawsuit.

The Illinois Brick rule has often been used to shut the courthouse doors to everyday consumers, preventing them from challenging anti-competitive conduct. Often, the intermediate purchasers, like the contractors in Illinois Brick, are less motivated to bring a lawsuit, and the price-fixing or other illegal conduct goes unchallenged. Many state courts and legislatures have ruled that the Illinois Brick doctrine doesn’t apply to their state’s own antitrust laws.

Monday’s decision affirmed the Illinois Brick framework under federal law, while announcing that consumers who purchase apps from app stores are in fact direct purchasers for the purposes of antitrust analysis, not indirect purchasers as Apple had claimed. As a result, the suit against Apple can continue.

Software developers who wish to sell apps in the Apple App Store can set their own prices, but Apple charges a 30 percent fee on all sales. The plaintiffs argued that Apple’s fees make apps more expensive than they would be in a competitive—rather than exclusive—marketplace. The justices didn’t rule on whether Apple’s conduct violated the antitrust laws, but they ruled that the plaintiffs are direct purchasers, and that their suit should proceed in the district court.

The Court’s 5-4 opinion was written by its newest member, Justice Brett Kavanaugh, who joined the Court’s moderate justices rather than the conservative colleagues with whom he usually votes.

If the courts go on to rule that Apple’s 30 percent commission on app sales violates the antitrust laws, it could open the door for more third-party software marketplaces for mobile applications. It could also lead to more choices of app markets in the Android world. Many app developers (including EFF) and users have chafed under the seemingly arbitrary restrictions imposed by app stores, so this case could have wide-ranging effects.

We are glad to see the Supreme Court empower antitrust enforcement, especially given the general trend over the past generation towards limiting antitrust law. While the Apple case remains unresolved and will continue, Monday’s decision ensures that iOS users will enjoy access to the courts, and that companies crafting platforms can’t abuse monopoly power to extract higher returns while enjoying immunity from suits by consumers.

What You Need to Know About the Latest WhatsApp Vulnerability

EFF - Thu, 05/16/2019 - 12:40pm

If you are one of WhatsApp’s billion-plus users, you may have read that on Monday the company announced that it had found a vulnerability. This vulnerability allowed an attacker to remotely upload malicious code onto a phone by sending packets of data that look like phone calls from a number not in your contacts list; these repeated calls then cause WhatsApp to crash. This is a particularly scary vulnerability because it does not require the user to pick up the phone, click a link, enter login credentials, or interact in any way.

Fortunately, the company fixed the vulnerability on the server side over the weekend and rolled out a patch for the client side on Monday.

What does that mean for you? First and foremost, it means today is a good day to make sure that you are running the latest version of WhatsApp. Until you update your software, your phone may still be vulnerable to this exploit.

Are you likely to have been targeted by this exploit? Facebook (which owns WhatsApp) has not indicated that it knows how many people were targeted, but it has attributed the exploit’s use to an Israeli security company, NSO Group, which has long claimed to be able to install its software by sending a single text message. The exploit market pays top dollar for “zero-click install” vulnerabilities in the latest versions of popular applications. It is not so remarkable that such capabilities exist; it is remarkable that WhatsApp’s security team found and patched the vulnerability.

NSO Group is known to sell its software to governments such as Mexico and Saudi Arabia, where these capabilities have been used to spy on human rights activists, scientists, and journalists, including Jamal Khashoggi, who was allegedly tracked using NSO Group’s Pegasus spyware in the weeks leading up to his murder by agents of the Saudi government.

What can you do if you have antagonized a government known to use NSO Group’s spyware and your WhatsApp is getting strange calls and crashing? You can contact Eva Galperin at EFF’s Threat Lab at eva@eff.org.

As for everyone else, stay calm, update your software, and keep using chat apps like WhatsApp that offer end-to-end encryption. Advanced malware and vulnerabilities like this may grab headlines, but for most people most of the time end-to-end encryption is still one of the most effective ways to protect the contents of your messages.

California: Speak Out for the Right to Take Companies That Violate Your Privacy to Court

EFF - Thu, 05/16/2019 - 12:40pm

If a company disclosed information about your cable subscription without your permission, you already have the legal right to take them to court. Why should it be any different if a company ignores your requests about how to treat some of your most private information—where you go, where you live, or who you are?

Ninety-four percent of Californians agree they should be able to take companies that violate their privacy to court. S.B. 561, authored by Sen. Hannah-Beth Jackson and sponsored by the Attorney General, would allow individuals to stand up to the big companies that abuse their information and invade their privacy.

Take Action

California: Tell The Senate to Empower You To Stand Up For Your Privacy

This bill is the only one in the California legislature today to strengthen enforcement of the California Consumer Privacy Act (CCPA), an important privacy law passed last year and slated to go into effect in January.

The CCPA established important rights, but lacks the bite it needs to back up its bark. Empowering consumers to be able to sue companies directly, also known as a private right of action, is one of EFF’s highest priorities in any data privacy legislation.

A private right of action means that every person can act as their own privacy enforcer. Many privacy statutes allow people to sue companies directly, including federal laws on wiretaps, stored electronic communications, video rentals, driver’s licenses, and, yes, cable subscriptions.

The CCPA gives Californians a limited right to do this—just in cases of data breach. But failing to protect against data breaches is not the only way that companies violate our privacy and abuse our trust.

S.B. 561 would give consumers this powerful tool in all cases where companies violate their CCPA rights. If passed, this law would allow consumers to take companies to court if:

  • companies sell their data after being told not to
  • companies do not delete information after being asked to
  • companies refuse to comply with data portability requests
  • companies discriminate against them for exercising their privacy rights
  • companies sell the information of those younger than 13 without first obtaining explicit permission to do so

Private enforcement is a necessary right for consumers to have as a check on the behavior of giant companies that vacuum up our personal information and ignore our wishes.

Government agencies alone cannot sufficiently protect individual privacy. Agencies may fail to enforce privacy laws for any number of reasons, including competing priorities, regulatory capture, or, as is the case in California, a lack of resources.

Stacey Schesser, Supervising Deputy Attorney General on Consumer Protection, said in an April hearing that her office—even after an expansion—would only be able to prosecute three cases a year to protect the rights of 40 million Californians.

“The reason that the PRA [private right of action] is so important here is because it provides a critical adjunct to the work of the Attorney General. It would work in parallel in ensuring that the law is enforced,” Schesser said. “To provide those rights and then say that you can’t enforce them if companies don’t comply, that’s about fundamental fairness as well.”

It is not enough for government to pass laws that protect consumers from corporations that harvest and monetize their personal data. It is also necessary for these laws to have bite, to ensure companies do not ignore them.

Tell your state Senator to publicly support S.B. 561 and empower you to enforce your own privacy rights.

The Christchurch Call: The Good, the Not-So-Good, and the Ugly

EFF - Thu, 05/16/2019 - 9:12am

In the wake of the mass shootings at two mosques in Christchurch, New Zealand, that killed fifty-one people and injured more than forty others, the New Zealand government has released a plan to combat terrorist and violent content online, dubbed the Christchurch Call. The Call has been endorsed by more than a dozen countries, as well as eight major tech companies.

The massacre, committed on March 15 by a New Zealand national connected with white supremacist groups in various countries, was intentionally live-streamed and disseminated widely on social media. Although most companies acted quickly to remove the video, many New Zealanders—and others around the world—saw it by accident on their feeds.

Just ahead of the Call's release, the New Zealand government hosted a civil society meeting in Paris. The meeting included not only digital rights and civil liberties organizations (including EFF), but also those working on countering violent extremism (CVE) and against white supremacy. In the days prior to the meeting, members of civil society from dozens of countries worked together to create a document outlining recommendations, concerns, and points for discussion for the meeting (see PDF at bottom).

As is too often the case, civil society was invited late to the conversation, which rather unfortunately took place during Ramadan. That said, New Zealand Prime Minister Jacinda Ardern attended the meeting personally and engaged directly with civil society members for several hours to understand our concerns about the Call—a rather unprecedented move, in our experience.

The concerns raised by civil society were as diverse as the groups represented, but there was general agreement that content takedowns are not the answer to the problem at hand, and that governments should be focusing on the root causes of extremism. PM Ardern specifically acknowledged that in times of crisis, governments want to act immediately and look to existing levers—which, as we’ve noted many times over the years, are often censorship and surveillance.

We appreciate that recognition. Unfortunately, however, the Christchurch Call released the following day is a mixed bag that contains important ideas but also endorses those same levers.

The good:

  • The first point of the Christchurch Call, addressing government commitments, is a refreshing departure from the usual. It calls on governments to commit to “strengthening the resilience and inclusiveness of our societies” through education, media literacy, and fighting inequality.
  • We were also happy to see a call for companies to provide greater transparency regarding their community standards or terms of service. Specifically, companies are called upon to outline and publish the consequences of sharing terrorist and violent extremist content; describe policies for detecting and removing such content; and provide an efficient complaints and appeals process. This ask is consistent with the Santa Clara Principles and a vital part of protecting rights in the context of content moderation.

The not-so-good:

  • The Call asks governments to “consider appropriate action” to prevent the use of online services to disseminate terrorist content through loosely defined practices such as “capacity-building activities” aimed at small online service providers, the development of “industry standards or voluntary frameworks,” and “regulatory or policy measures consistent with a free, open and secure internet and international human rights law.” While we’re glad to see the inclusion of human rights law and concern for keeping the internet free, open and secure, industry standards and voluntary frameworks—such as the existing hash database utilized by several major companies—have all too often resulted in opaque measures that undermine freedom of expression.
  • While the government of New Zealand acknowledged to civil society that their efforts are aimed at social media platforms, we’re dismayed that the Call itself doesn’t distinguish between such platforms and core internet infrastructure such as internet service providers (ISPs) and content delivery networks (CDNs). Given that, in the wake of attacks, New Zealand’s ISPs acted extrajudicially to block access to sites like 8Chan, this is clearly a relevant concern.

The ugly:

  • The Call asks companies to take “transparent, specific measures” to prevent the upload of terrorist and violent extremist content and prevent its dissemination “in a manner consistent with human rights and fundamental freedoms.” But as numerous civil society organizations pointed out in the May 14 meeting, upload filters are inherently inconsistent with fundamental freedoms. Moreover, driving content underground may do little to prevent attacks and can even impede efforts to do so by making the perpetrators more difficult to identify.
  • We also have grave concerns about how “terrorism” and “violent extremism” are defined, and by whom. Companies regularly use blunt measures to determine what constitutes terrorism, while a variety of governments—including Call signatories Jordan and Spain—have used anti-terror measures to silence speech.

New Zealand has expressed interest in continuing the dialogue with civil society, and has acknowledged that many rights organizations lack the resources to engage at the same level as industry groups. So here’s our call: New Zealand must take its new role as a leader in this space seriously and ensure that civil society has an early seat at the table in all future platform censorship conversations. Online or offline, “Nothing about us without us.”

Send a Message to Congress: The Last Thing We Need is More Bad Patents

EFF - Wed, 05/15/2019 - 2:25pm

Two Senators are working on a bill that will make it much easier to get, and threaten lawsuits over, worthless patents. That will make small businesses even more vulnerable to patent trolls, and raise prices for consumers. We need to speak up now and tell Congress this is the wrong direction for the U.S. patent system.

TAKE ACTION

Tell Congress we don't need more bad patents

There’s no published bill yet, but Senators Thom Tillis (R-N.C.) and Chris Coons (D-Del.) have published a “framework” outlining how they intend to undermine Section 101 of the U.S. patent law. That’s the section of law that forbids patents on abstract ideas, laws of nature, and things that occur in nature.

Section 101’s longstanding requirement should be uncontroversial—applicants have to claim a “new and useful” invention to get a patent—a requirement that, remarkably, Tillis and Coons say they will dismantle.

In recent years, important Supreme Court rulings like Alice v. CLS Bank have ensured that courts give full effect to Section 101. That’s given small businesses a fighting chance against meritless patents, since they can be evaluated—and thrown out—at early stages of a lawsuit.

Check out the businesses we profile in our “Saved by Alice” page. Patent trolls sued a bike shop over message notifications; a photographer for running online contests; and a startup that put restaurant menus online. It’s ridiculous that patents were granted on such basic practices—and it would be even more outrageous if those businesses had to hire experts, undergo expensive discovery, and endure a jury trial before they get a serious evaluation of such “inventions.”

Listen to our interview with Justus Decher. Decher’s health company was threatened by a company called MyHealth over a patent on “remote patient monitoring.” MyHealth never built a product, but they demanded $25,000 from Decher—even before his business had any profits.

Why is the Tillis-Coons proposal moving forward? Pharmaceutical and biotech companies are working together with lobbyists for patent lawyers and companies that have aggressive licensing practices. They’re pushing a false narrative about the need to resolve “uncertainty” in the patent law. But the only uncertainty produced by a strong Section 101 is in the profit margins of patent trolls and the lawyers filing their meritless lawsuits.

Tell Congress—don’t feed the patent trolls. Say no to the Tillis-Coons patent proposal. 

TAKE ACTION

TELL CONGRESS WE DON'T NEED MORE BAD PATENTS

Why Has San Francisco Allowed Comcast and AT&T to Dictate Its Broadband Future (Or Lack Thereof)?

EFF - Wed, 05/15/2019 - 1:27pm

American cities across the country face the same problem: major private Internet providers, facing little competition, refuse to invest in and upgrade their networks for all residents. But few cities have, like San Francisco, gone to the trouble of analyzing the problem and devising a solution, only to then do nothing.

For over a year, the city has sat on a fully vetted, ready-to-implement strategy to bring affordable high-speed broadband competition to all of its residents. According to the city’s own analysis, private providers will never address some of the most serious problems with the community’s infrastructure—such as the fact that 100,000 residents lack access to broadband, and 50,000 residents only have access to dial-up speeds. Of the city’s public school students, 15 percent do not have access to the Internet, with that number rising to 30 percent for communities of color.

No Evidence That Private Competition Will Come to the Entire City

Within the city’s 195-page report on its options, the most important finding was that major private providers such as Comcast and AT&T (both of which strongly opposed the city’s effort) have no plans to compete with each other anywhere in the city. While certain parts of San Francisco’s market enjoy competition—usually driven by Sonic, a smaller regional ISP that deploys gigabit fiber for $40 a month—many parts of the city do not.

The remedy to this problem is within reach: connecting every home and business with open access fiber. When the city invests in this infrastructure, it enables more private companies to compete in the market for Internet service. The model has proven successful internationally, and even smaller, less well-financed communities in the United States, such as Layton, Utah, are deploying open access fiber that delivers gigabit service (with multiple ISPs competing) for an average of $50 a month. The city even has three qualified bidders ready to build the infrastructure, if the city gives them the green light and, most importantly, the financial investment.

The $1.8 billion Universal High-Speed Fiber Network Will Benefit Residents for Decades

While the price tag may seem high, this is an infrastructure investment that will serve generations, and it will scale affordably upward in capacity as technology improves. In other words, the fiber won’t need replacing, only the electronics on either end. That’s much less expensive than upgrades that require digging up streets. Many private ISPs answer to investors who expect a short turnaround in profits, which is why Verizon halted FiOS expansion years ago. There is no feasible way to build fiber infrastructure and expect a quick profit. Rather, the right way to look at what is essentially 21st-century broadband infrastructure is to think long term, over a multi-decade window.

If you live in San Francisco and are paying a lot for broadband service (if you can get it at all), look at how much you pay over one year, and then over several. When the cost of an uncompetitive service starts adding up to several thousand dollars, the city’s estimated $2,000 per household for the entire project starts to look reasonable. In fact, homes connected to fiber see an expected three percent increase in value, which averages out to $5,437 per household. Furthermore, it would not be any less expensive for a private provider to undertake this fiber project. And as the city’s analysis notes, private sector investment can’t be relied upon: providers focused on short-term returns have made clear they do not intend to invest that kind of money.
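
As a rough back-of-the-envelope comparison (the monthly bill below is an assumption for illustration; the $2,000 figure is the city’s own per-household estimate cited above), the one-time build cost is quickly matched by what households pay for uncompetitive service:

    # Months of broadband bills needed to equal the city's one-time
    # per-household build estimate. The $80/month figure is assumed.
    monthly_bill = 80            # assumed price of uncompetitive service, $/month
    build_cost_per_home = 2000   # city's estimated project cost per household, $

    months = build_cost_per_home / monthly_bill
    print(f"{months:.0f} months ({months / 12:.1f} years) of bills "
          f"equal the per-household build cost")

Under these illustrative numbers, about two years of bills equal the entire per-household cost of the build-out, while the fiber itself keeps delivering value for decades.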

Therefore, we must continue to push our local leaders to be bold, forward-thinking, and to stand up to the private incumbents, who are more than happy with the status quo. Otherwise, most residents within the city of San Francisco will have to accept the fact that they will miss out on broadband access, or be forced into subscribing to a high-speed broadband monopoly.

San Francisco Takes a Historic Step Forward in the Fight for Privacy

EFF - Tue, 05/14/2019 - 8:06pm

The San Francisco Board of Supervisors voted 8-1 today to make San Francisco the first major city in the United States to ban government use of face surveillance technology. This historic measure applies to all city departments. The Stop Secret Surveillance Ordinance also takes an important step toward ensuring a more informed and democratic process before the San Francisco Police Department and other city agencies may acquire other kinds of surveillance technologies.

Face recognition technology is a particularly pernicious form of surveillance, given its disparate propensity to misidentify women and people of color. However, even if those failures were addressed, we are at a precipice where this technology could soon be used to track people in real time. This would place entire communities of law-abiding residents into a perpetual line-up as they attend worship, spend time with romantic partners, attend protests, or simply go about their daily lives.

It is encouraging to see San Francisco take this proactive step in anticipating the surveillance problems on the horizon and heading them off in advance. This is far easier than trying to put the proverbial genie back in the bottle after it causes harm.

Today’s 8-1 vote appears veto-proof, especially since two sponsors of the ordinance were not in attendance and the margin would likely have been even wider with them present. However, the fight for the privacy and civil rights of the people of San Francisco is not over. EFF will continue to work with our members, coalition partners, lawmakers, and neighbors to urge Mayor Breed to sign the Stop Secret Surveillance Ordinance into law. Please join us in this fight by contacting Mayor Breed and expressing your support for the Stop Secret Surveillance Ordinance.

Victory! EFF Wins National Security Letter Transparency Lawsuit

EFF - Tue, 05/14/2019 - 6:44pm

A federal district court in San Francisco has ruled strongly in favor of our Freedom of Information Act lawsuit seeking records of how and when the FBI lifts gag orders issued with National Security Letters (NSLs). These records will provide a window into the FBI’s use of a highly secretive investigative tool that has been historically misused. They will also provide insight into the effectiveness of the USA Freedom Act, the national security reform law passed by Congress in 2015.

NSLs are a form of administrative subpoena that allows the government to obtain basic information about customers of communications providers, banks and credit agencies, and a range of other companies. The defining feature of NSLs, however, is that the FBI can issue a blanket gag order with its information request, preventing recipients from saying anything about them, including the very fact that they have received an NSL.

The FBI has issued over 500,000 NSLs since 2001, the vast majority of which contained such indefinite gag orders. As a result, the public has had little insight into the scope of the government’s use of NSLs, aside from internal reports that paint a picture of overreach and misuse. EFF and others have long argued that NSL gag orders violate the First Amendment, and we succeeded in having the statute ruled unconstitutional in 2013. 

In response to these constitutional challenges, Congress included a series of reforms in USA Freedom while still allowing the FBI to continue issuing NSLs. One provision of the law required the FBI to develop “termination procedures” to lift NSL gag orders that were no longer necessary to protect national security or other important government interests.

If these procedures worked as Congress intended, the government should be regularly notifying companies that they could speak about the NSLs they received and publish the letters themselves. We saw this happen in a small number of cases, but these represented only a tiny fraction of the more than 500,000 letters issued since the passage of the Patriot Act in 2001. Meanwhile, we’ve also seen—in our own case and in others—that even after USA Freedom the FBI believes it can issue indefinite and even permanent gag orders.

That led us to wonder just how the FBI was implementing its termination procedures and how effective they had been, so we filed a FOIA request in late 2016. After the FBI said it had no records at all, we filed suit.

Although the FBI did eventually produce some documents, it adopted a staggering secrecy argument. It claimed that the very letters it sends to NSL recipients to tell them it is terminating a gag order—documents that by their very nature were created to be public—could allow criminals and terrorists to learn sensitive information about the FBI’s investigative techniques.

Fortunately, Judge Vince Chhabria of the Northern District of California ruled that the documents must be disclosed. He called the FBI’s argument “dubious,” noting that every single one of the termination letters at issue could already be public. “[A]ny company whose nondisclosure requirement has been lifted can, by definition, disclose its receipt of the associated national security letter,” Chhabria wrote. “This is the natural result of Congress’s judgment that the use of national security letters should not be concealed from the public once concealment is no longer necessary to protect an investigation or national security, which further undermines the government’s assertion that aggregate disclosure of this information would fall within Congress’s definition of ‘disclos[ing] techniques and procedures for law enforcement investigations.’”

In other words, the FBI simply had no basis to claim secrecy in records that Congress and the Bureau itself had already determined could be made public. The court ordered the parties to agree on a schedule for releasing the NSL termination letters in the next two weeks. We’re looking forward to analyzing the documents soon. 

Related Cases: National Security Letters (NSLs)

YouTube User Fights Unfair Takedown Campaign from UFC

EFF - Tue, 05/14/2019 - 4:13pm
EFF Demands End to Bogus Copyright Claims Aimed at Post-Fight Commentary

San Francisco – The creator of popular post-fight commentary videos on YouTube is demanding an end to the Ultimate Fighting Championship (UFC)’s unfair practice of sending takedown notices based on bogus copyright claims. The creator, John MacKay, is represented by the Electronic Frontier Foundation (EFF).

MacKay operates the “Boxing Now” channel on YouTube, and his videos include original audio commentary and a small number of still images from UFC events. While those stills are an obvious fair use—a lawful way to use copyrighted content without permission—UFC has sent five takedown notices to YouTube claiming infringement, and YouTube has complied with each takedown. MacKay has responded every time with a counter-notice, explaining the fair and non-infringing nature of his videos, and YouTube has reposted the videos after UFC failed to respond.

“My YouTube channel is a popular source of post-fight commentary,” said MacKay. “My videos are most often viewed in the days immediately after a fight, and when UFC has them taken down for a few days with these unfair copyright claims, I lose a lot of viewers and a significant amount of money.”

UFC also produces its own YouTube videos with post-fight commentary. In a letter sent to the chief legal officer of UFC today, EFF points out that convincing YouTube to remove MacKay’s videos may benefit UFC unfairly by reducing competition for post-fight commentary videos.

“Is UFC afraid of a fair fight?” asked EFF Staff Attorney Alex Moss. “It’s time for UFC to stop sending improper takedown notices for Mr. MacKay’s videos. Independent video creators have a right to fair use of copyrighted works.”

For the full letter sent to UFC:
https://www.eff.org/document/letter-ufc

Contact: Alex Moss, Mark Cuban Chair to Eliminate Stupid Patents and Staff Attorney, alex@eff.org

Courts to Government Officials: Stop Censoring on Social Media

EFF - Mon, 05/13/2019 - 6:40pm

The Internet, and social media in particular, is uniquely designed to promote free expression, so much so that the Supreme Court has recognized social media as the “most important places” for speech and sharing viewpoints. Like most of us, government agencies and officials have created social media profiles and use them to connect directly with people at a scale previously unheard of. But some public officials, by silencing critics, are using these pages as a tool of censorship rather than a tool of governance.

Thankfully, courts are stepping in to make sure that long-established protections for speech in physical spaces apply to speech on the Internet. In the most notorious case, a district court in New York found that when President Trump blocks people on Twitter, he violates the First Amendment because he is discriminating against certain viewpoints (mostly critics) and preventing them from participating in debate on his Twitter page. The case has been appealed, and in the time since, two federal courts of appeals have ruled in separate cases that viewpoint discrimination on government social media pages is illegal.

In January, the Fourth Circuit became the first federal appellate court to decide that government officials cannot pick and choose what views can appear on government social media pages. That court found in Davison v. Randall that a county official created a public forum, a legal category defined by the First Amendment, when she made an official Facebook page for her office, and that she engaged in unlawful discrimination in that forum when she deleted the comment of a local critic. The First Amendment sharply limits content discrimination and essentially bars viewpoint discrimination in public forums.

Most recently, the Fifth Circuit has joined the growing body of federal courts finding that government officials cannot delete comments or block people that they disagree with on social media. In Robinson v. Hunt County, Texas the Fifth Circuit ruled that the Sheriff engaged in unconstitutional viewpoint discrimination when he (or someone in the sheriff’s office) deleted a Facebook comment that he objected to.

In response to the appearance of several comments expressing anti-police sentiment on its Facebook page, the Hunt County Sheriff’s Office posted that they would delete comments that they found inappropriate. The Plaintiff, Ms. Robinson, then commented on that post that “degrading or insulting police officers is not illegal . . . just because you consider a comment to be ‘inappropriate’ doesn’t give you the legal right to delete it and/or ban a private citizen from commenting on this TAX PAYER funded social media site.” Ms. Robinson’s comment was promptly deleted.

The Fifth Circuit found that deleting this comment was viewpoint discrimination, unlawful in any government-created forum for private speech. However, the court declined to reach the question of exactly what type of forum the Hunt County Sheriff’s Office created.

Robinson also raised a unique question about whether this type of censorship is official government policy or the rogue action of a government official. The court found that the Sheriff makes policy for everything related to his elected office, including the Sheriff’s Office Facebook page. Further, because the Sheriff’s Office had posted a notice that it would delete comments, it is clear that whoever deleted Ms. Robinson’s comment did so according to official policy.

EFF filed an amicus brief in the case, urging the court to look at a government’s actions rather than just statements when determining if the page is a forum. (The page had included the Sheriff’s statement disclaiming that the page was a public forum).

This ruling now sets precedent for our own case where we are representing PETA in suing Texas A&M University for using Facebook tools to explicitly censor PETA and PETA campaigns from the university Facebook page.

The trend has become clear: when a government official or office creates a page on social media, they’re tasked with upholding, not infringing on, the public’s free speech rights.

Don’t Let California’s Legislature Extend Broadband Monopolies for Comcast and AT&T

EFF - Mon, 05/13/2019 - 4:51pm

Californians have successfully pushed the state's legislature to restore two-thirds of the 2015 Open Internet Order through state laws. Stopping legislation from Assemblymember Lorena Gonzalez—backed by AT&T and Comcast (A.B. 1366)—is the final piece to bringing back those critical protections to promote broadband choice.  

The California Assembly will soon take up this bill, which would renew a 2011 ISP-backed law that expires this year. That law had the stated goal of promoting choice and competition for Voice over Internet Protocol (VoIP) services. In practice, however, it has sidelined state and local governments from exerting authority over broadband and led to fewer choices for consumers at higher prices.

Today, most Californians face a monopoly for high-speed broadband access. If we let the 2011 law expire, California could instead choose to empower state and local governments to create policies to eliminate local monopolies. Doing so now is even more crucial than it was in 2011, as the FCC no longer oversees the broadband industry.

Take Action

Tell Your Legislators Not to Extend Broadband Monopolies

Proponents of the bill hope to mislead legislators by saying it only seeks to protect applications like WhatsApp and Skype from regulation. But the companies that make those products are not supporting this bill. Who is? Companies including Comcast, AT&T, and Verizon, which have spent close to $1 million lobbying on it in 2019.

Broadband competition does not happen in a vacuum. That is why competitors to the big ISPs in California have written in opposition to AB 1366. It's also why dozens of ISPs pleaded with the FCC not to repeal the 2015 Open Internet Order because its policies helped promote competitive entry. To compete, they all need to have the same access that Comcast and AT&T have to the poles and underground conduit to connect the wires that give you broadband. But the FCC no longer gives them equal federal rights, leaving it to states to step up to fill the void.

California state law only grants special infrastructure rights to cable companies and telephone companies. That means Californians have very little access to wholesale open-access fiber companies, or to any broadband provider that is not a cable TV or telephone company. That differs significantly from places such as Utah, where people have eleven choices for gigabit fiber at around $50 a month. This barrier is so massive that even Google couldn’t overcome it when AT&T blocked its entry in Austin, Texas, by denying access to infrastructure that AT&T owned. Notably, that blocking prompted local government intervention, something that is likely not allowed in California until this law expires.

Passing A.B. 1366 would effectively maintain our current dismal status quo. Changing the 2011 law, or just letting it expire, would allow the state and local governments, which were recently empowered to build their own broadband infrastructure, to begin the hard work of giving people choices. For example, EFF has called for policymakers to promote a gigabit fiber for all plan. If California relinquishes its authority over broadband, there is zero chance we'll see statewide competition any time soon.

Without strong public opposition to A.B. 1366, this bill will likely pass. That would deny Californians their broadband choices, unless the federal government intervenes. (We wouldn't recommend holding your breath for that.) California’s Assembly Communications and Conveyance Committee, the same committee that voted to kill the state’s net neutrality law before reversing itself after public outcry, has already uncritically voted to advance this law. You can watch the hearing here, starting at minute 59.

A.B. 1366 Stems from a Long History of Industry Efforts to Avoid Competition

A.B. 1366 is just the latest bill to serve a decades-long American broadband industry agenda: prevent oversight, consolidate, raise prices, and provide inferior service compared to our international peers. It's connected with the same set of actors that prompted the FCC to repeal the 2015 Open Internet Order, which stood for competition, privacy, and net neutrality.

How did this happen in California?

It starts with the American Legislative Exchange Council (ALEC), an organization with a long history of drafting bills that protect broadband monopolies, including state laws that ban local governments from providing high-speed fiber to their residents. ALEC is also on record supporting the FCC’s effort to repeal net neutrality and was highly critical of California's net neutrality law, which the state passed in defiance of the big ISP agenda.

ALEC in 2007 produced model legislation to encourage state public utility commissions to abandon their authority over VoIP services. In 2011, then-senator Alex Padilla took up a version of the ALEC law, SB 1161, to expand that prohibition to broadband. This applied to all state and local government entities, except for cities (those were restrained under other state laws until last year). It also cemented regulations that benefit telephone and cable television companies, but not their competitors. Big ISPs loved that idea, and AT&T, Comcast, Verizon, CTIA, and the California Cable and Telecommunications Association spent a combined total of $4.89 million on lobbying during the original passage of SB 1161.

At the time, Padilla justified this drastic step on the ground that the FCC regulated these companies. As of 2017, that is no longer true, thanks to the efforts of A.B. 1366 supporters Comcast, AT&T, and Verizon. ALEC recently re-endorsed the continuation of its 2007 model legislation for states. And while several (but not all) of the major ISPs no longer financially support ALEC's work, a split prompted by a 2018 controversy that led Verizon to issue statements condemning racism, white supremacy, and sexism stemming from ALEC activities, that certainly doesn’t mean the big ISPs have stopped pushing for these laws.

California’s Current Law Stifles Broadband Competition and Hurts Vulnerable People

EFF has detailed at length how the broadband market has changed since the 2011 law was first passed and why competition policy is desperately needed in high-speed access. FCC data makes it clear that rural Californians and low-income Californians are the ones most often left behind. The law is so bad that it also affects other issues, such as the need to regulate the prison industry’s exploitative communications practices for inmates and their families.

Ultimately, A.B. 1366 stands in the way of giving consumers good broadband choices. Across the world, every country whose broadband outperforms the United States' got there because its government chose not to deregulate the industry. Instead, those governments promoted access to rights-of-way infrastructure, prohibited private companies from bottlenecking or crowding out competitors, and set goals and metrics to assess progress. They've accomplished this by deploying policy that is enforced by law rather than relying on monopolists to act benevolently.

If you are tired of having no options in broadband, please write to your legislators soon to stop this bill. We need to let this ISP-drafted law die, because all it does is protect broadband monopolies. We need to demand that California promote competition in broadband, and rejecting this bill is the first step. Tell your representatives to vote no on A.B. 1366.

Why Outlawing Cryptocurrency Purchases is a Terrible Idea

EFF - Mon, 05/13/2019 - 3:59pm

A member of the U.S. House of Representatives last week called for a bill banning Americans from making cryptocurrency purchases, aligning with anti-cryptocurrency policies in countries such as Iran and Egypt. There is no language yet for this potential bill, nor any explanation of whether it would ban Americans from buying cryptocurrencies, from using cryptocurrencies to make other purchases, or both. Nonetheless, it’s a good moment to remind everyone why a bill outlawing cryptocurrencies is a terrible idea.

Attempts to ban cryptocurrencies are often rooted in fundamental misunderstandings. One common refrain is that criminals use cryptocurrencies to facilitate illegal activity, and thus we should ban cryptocurrencies to hamper that illegal activity. This is wrong for several reasons: first, it ignores the many entirely legal uses for cryptocurrencies that already exist and that will continue to develop in the future. Cryptocurrencies have been used for a decade to store and transfer value with near-zero transaction costs and no need for intermediaries like banks. As more applications make holding and exchanging cryptocurrencies easier, everyday consumers are using cryptocurrency regularly for innocuous activities like buying furniture on Overstock.com and sending money to family members overseas. And innovation related to cryptocurrency technology is giving rise to more use cases: for example, some cryptocurrencies enable programmers to write computer programs (so-called “smart contracts”) that automatically transfer cryptocurrency to others upon certain conditions being met. These are just a few examples of the many potential uses of this technology that a ban on cryptocurrency would undermine.
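
To make the “smart contract” idea concrete, here is a minimal, purely illustrative sketch in Python. Real smart contracts are written in chain-specific languages and executed by the network itself; the Escrow class, its method names, and the addresses below are our own hypothetical inventions, not any platform's actual API:

```python
# Conceptual sketch of a "smart contract": funds locked in the
# contract are released automatically once an agreed condition is
# met, with no bank or other intermediary approving the transfer.
# This Python model only illustrates the logic; it does not run on
# any blockchain, and all names are hypothetical.

class Escrow:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer = buyer        # pseudonymous address of the payer
        self.seller = seller      # pseudonymous address of the payee
        self.amount = amount      # value locked in the contract
        self.delivered = False    # the condition the contract watches
        self.settled = False

    def confirm_delivery(self) -> None:
        # In a real contract, this confirmation might come from the
        # buyer's signed transaction or a trusted data feed ("oracle").
        self.delivered = True

    def settle(self, ledger: dict) -> bool:
        # The transfer happens automatically once the condition holds.
        if self.delivered and not self.settled:
            ledger[self.seller] = ledger.get(self.seller, 0) + self.amount
            self.settled = True
        return self.settled


ledger = {}
deal = Escrow(buyer="0xBUYER", seller="0xSELLER", amount=5)
deal.confirm_delivery()
deal.settle(ledger)
print(ledger)  # {'0xSELLER': 5}
```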

The fact that a technology could be used to violate the law does not mean we should ban it. Notably, criminals have long used cash—which, like some cryptocurrencies, allows for greater anonymity—to aid in committing crimes. But we don’t call for a ban on cash as a result, and we don’t blame Ford when one of its cars is used as a getaway vehicle in a bank robbery. Nor would such a law likely stop criminals from using cryptocurrency, since criminals are, by definition, more willing to violate existing laws. Ultimately, banning cryptocurrencies would rob Americans of opportunities to access potentially significant technologies, and have no real impact on criminals abusing these tools.

We’ve seen this line of reasoning before. Critics of end-to-end encryption and Tor have claimed criminals can hide behind the privacy-protective functions of such technologies. But the increased anonymity and privacy-enhancing features of some cryptocurrencies are part of what make the technology so potentially important. To date, many cryptocurrencies are not terribly private; transactions are recorded on permanent public ledgers, and while users are identified with pseudonymous public keys, this is far less privacy-protective than using cash. There are promising new approaches to developing more private cryptocurrencies. However, none of those tools has yet reached the widespread popularity of Bitcoin or Ethereum. As privacy-protective cryptocurrencies begin to gain traction, it’s important that we don’t let regulatory backlash prevent these tools from reaching people.
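
To see why a pseudonymous public ledger is far weaker than cash, consider this small hypothetical sketch in Python: because every transaction is public, anyone can group a ledger's entries by address and reconstruct a spending profile, so a single real-world link to an address exposes its entire history. All addresses and transactions below are invented for illustration:

```python
from collections import defaultdict

# A hypothetical public ledger: every transaction is permanently
# visible to anyone, identified only by pseudonymous addresses.
ledger = [
    {"from": "1AliceXYZ", "to": "PharmacyCo",  "amount": 40},
    {"from": "1AliceXYZ", "to": "BookstoreCo", "amount": 15},
    {"from": "1BobQRS",   "to": "PharmacyCo",  "amount": 25},
    {"from": "1AliceXYZ", "to": "ClinicCo",    "amount": 120},
]

# Anyone can build a per-address spending profile without
# permission, something that is impossible with cash.
profiles = defaultdict(list)
for tx in ledger:
    profiles[tx["from"]].append((tx["to"], tx["amount"]))

# If "1AliceXYZ" is ever tied to a real identity (say, via an
# exchange record), her entire transaction history is exposed.
for address, spends in profiles.items():
    print(address, spends)
```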

Our financial transactions paint an intimate portrait of our lives, often exposing our religious beliefs, family status, medical history, and many other facets of our lives we might prefer to keep private. Yet American laws do not adequately secure financial privacy. Rather, there is a patchwork of limited protections. These include the Gramm-Leach-Bliley Act (which requires banks to notify you about their information-sharing policies and give you some opportunity to opt out); the stronger California Financial Information Privacy Act (if you happen to be Californian); and the Fair Credit Reporting Act (which is primarily focused on credit and not financial transactions). That’s why new tools that people can use to protect the privacy of their own financial transactions are so appealing: they offer a technological solution to protecting consumer privacy when the legal rights aren’t as robust or enforceable as we would like. A bill banning cryptocurrency would cut off this pro-privacy innovation, chilling the development of new technologies that might protect financial privacy before they have any chance of being developed and widely adopted.

Cryptocurrency innovation also holds the promise of righting other power imbalances. There are millions of people living in the United States who cannot obtain a bank account because they don’t have appropriate government-issued identification, don’t have a permanent physical address, fear exposing their home address for safety reasons, or have a history of unpaid bank fees. The Fair Credit Reporting Act requires banks to tell consumers why they are denied an account, but doesn’t guarantee them a bank account. Cryptocurrency may eventually help many of these unbanked individuals get access to financial services.

Cryptocurrency is also naturally more censorship-resistant than many other financial instruments currently available. At EFF, we’ve tracked numerous websites engaged in legal speech that faced financial censorship, often with little recourse. These include a social network for kinky people, a nonprofit supporting LGBTQ fiction, an online bookseller, and most famously the whistleblower website Wikileaks. Cryptocurrencies provide a powerful market alternative to the existing financial behemoths that exercise control over much of our online transactions today, so that websites engaged in legal-but-controversial speech have a way to receive funds when existing financial institutions refuse to serve them.

These ideas are not new. Censorship-resistance and privacy are attributes of cash, which people have enjoyed for thousands of years. Cryptocurrencies offer a pathway for bringing those attributes into our online world.

This is not to imply that cryptocurrency has achieved all of these goals. In fact, many watching the space have expressed frustration and skepticism about whether cryptocurrency can ever deliver on some of the hopes of early adopters. But even the staunchest cryptocurrency critic should be concerned about how misguided regulation in this space might constrict the legal rights of coders.

As with many new technologies, the cryptocurrency ecosystem faces serious challenges. These include fraudsters taking advantage of people’s ignorance about technology to scam them, the difficulty of achieving true decentralization of control, the challenge of building strong security into the applications built on top of cryptocurrencies, and the opportunity for entities holding cryptocurrency keys to abuse that access. These problems likely require responses that are technological and community-driven. In some instances, laws may be appropriate. But thoughtful laws regulating cryptocurrencies must focus on those who abuse the technology for fraud, not everyday consumers.

Crafting such laws requires a lot of care. Any such law would need to provide a significant on-ramp for emerging technologies; clear protections for non-custodial services, which by design cannot exchange or take users’ cryptocurrency without their participation; and strong protections for people who are merely writing and publishing code. These laws must also be technologically neutral, to avoid enshrining one particular cryptocurrency into law in ways that have unexpected consequences for projects with similar functionality.

Finding the right balance for regulating cryptocurrency requires that lawmakers protect consumer rights and foster innovation. Banning Americans from cryptocurrency purchases is short-sighted and anti-consumer, and falls far short of that standard.

With thanks to Marta Belcher, who provided feedback on an early draft of this post. 

Social Media Councils: A Better Way Forward, Window Dressing, or Global Speech Police?

EFF - Fri, 05/10/2019 - 2:01pm

Social media platforms routinely make arbitrary and contradictory decisions about what speech to block or penalize. No one is happy with the status quo: not people who want more censorship, nor people who want less censorship, nor people who simply want platforms to make different choices so that already-marginalized groups won't bear the brunt of their censorship policies. So many are looking for a better way forward. EFF offered a few thoughts on this last week, but we've also been looking at another persistent and intriguing idea, spearheaded largely by our friends at Article 19: the creation of social media councils (SMCs) to review content moderation decisions. Ever since Facebook announced a plan to create its own version, there’s been a surge of interest in this approach. Can it work?

At root, the concept is relatively simple: we can’t trust the platforms to do moderation well, so maybe we need an independent council to advise them on ways to do it better, and call them out when they blow it. A council might also provide an independent appeal mechanism for content removal decisions.

There are many different models for these councils. An appeals court is one. Another is the international arbitration structure that handles domain name disputes. Or consider European press councils, which administer codes of practice for journalists, investigate complaints about editorial content, and defend press freedom; they are funded by the media themselves but aim to be independent.

We’re all in favor of finding ways to build more due process into platform censorship. That said, we have a lot of questions. Who determines council membership, and on what terms? What happens when members disagree? How can we ensure the council’s independence from the companies it’s intended to check? Who will pay the bills, keeping in mind that significant funding will be needed to ensure that the council is not staffed only by the few organizations that can afford to participate? What standard will the councils follow to determine whether a given decision is appropriate? How do they decide which of the millions of decisions made get reviewed? How can they get the cultural fluency to understand the practices and vocabulary of every online community? Will their decisions be binding on the companies that participate, and if so, how will the decisions be enforced? A host of additional questions are raised in a recent document from the Internet and Jurisdiction Policy Network.

But our biggest concern is that social media councils will end up either legitimating a profoundly broken system (while doing too little to fix it) or becoming a kind of global speech police, setting standards for what is and is not allowed online whether or not that content is legal. We are hard-pressed to decide which is worse.

To help avoid either outcome, here are some guideposts, taken in part from our work on Shadow Regulation and the Santa Clara Principles:

  • Independence: SMCs should not be subject to or influenced by the platforms they are meant to advise. As a practical matter, that means that they should not depend directly on such platforms for funding (though platforms might contribute to an independently administered trust), and the platforms should not select council members. In addition, councils must be shielded from government pressures.
  • Roles: SMCs should not be regulators, but advisors. For example, an SMC might be charged with interpreting whether a platform’s decisions accurately reflect the platform’s own policies, and whether those policies themselves conform to international human rights standards. SMCs should not seek to play a legal role, such as interpreting local laws. Any such role could lead to SMCs becoming a practical speech police, exacerbating an existing trend where extrajudicial decision-makers are effectively controlling huge swaths of online expression. In addition, SMC members may not have the training needed to interpret local laws (hopefully they won’t all be lawyers), or may interpret them in a biased way. To avoid these pitfalls, SMCs should focus instead on interpreting specific community standards and determining if the company is adhering to its own rules.
  • Subject matter: Different platforms should be able to make different moderation choices, and users should be able to as well. So rather than being arbiters of “legitimate” speech, determining a global policy to which all platforms must adhere, SMCs should focus on whether an individual platform is being faithful to its own policies. SMCs can also review the policies to advise on whether the policies are sufficiently clear, transparent, and subject to non-discriminatory enforcement, and identify the areas where they are less protective of speech than applicable law or violate international human rights norms.
  • Jurisdiction: Some have suggested that SMCs should be international, reflecting the actual reach of many social media platforms, and should seek to develop and implement a coherent international standard. We agree with Article 19 that a national approach would be better because a national council will be better placed to review company practice in light of varying local norms and expectations. A regional approach might also work if the region's law and customs are sufficiently consistent.
  • Personnel: SMCs should be staffed with a combination of experts in local, national, and international laws and norms. To promote legitimacy, members should also represent diverse views and communities and ideally should be selected by the people who will be affected by their decisions. And there must be adequate funding to ensure that members who cannot afford to travel to meetings or donate their time are adequately compensated.
  • Transparency: The review process should be transparent and the SMC’s final opinions should be public, with appropriate anonymization to protect user privacy. In addition, SMCs should produce annual reports on their work, including data points such as how many cases they reviewed, the type of content targeted, and how they responded. New America has a good set of specific transparency suggestions (focusing on Facebook).

Facebook says it’s planning to launch its “Oversight Board” by the end of the year. We urge Facebook and others toying with the idea of social media councils to look to these guidelines. And we urge everyone else—platforms, civil society, governments, and users—to fight to ensure that efforts to promote due process in content moderation aren’t perverted to support the creation of an international, unaccountable, extrajudicial speech police force.

EFF Supports Unnamed Company in Bringing an End to Endless NSL Gag Orders

EFF - Fri, 05/10/2019 - 1:18pm

EFF and ACLU filed an amicus brief last week in a case that may finally force the Ninth Circuit Court of Appeals to resolve one of the most serious problems with National Security Letters: NSL gag orders that have no fixed end date. 

Similar to subpoenas, NSLs are information requests issued by the FBI and often sent to communication companies seeking customer data. But NSLs have a serious problem: they almost always include gag orders, making the entire process secret.

These secret government orders are ripe for abuse—especially when they don’t end. Under the First Amendment, the government cannot permanently silence speech in the name of national security. That’s because every threat to national security must end at some point, just as every secret must at some point shed the need for secrecy. But the NSL law allows just such indefinite gag orders. And in the last NSL case that came before the Ninth Circuit—brought by EFF on behalf of the providers CREDO and Cloudflare—the court dodged this issue.

The new case involves an unnamed company that received three NSLs from the FBI in 2011 requiring the company to turn over information about its customers. As with nearly all of the hundreds of thousands of NSLs issued since 2001, these NSLs were accompanied by gag orders that prevented the company from saying anything about the NSLs, including the fact that it had received them, unless and until the FBI told it otherwise. In 2018, the company asked the FBI to have a court review the need for the gags, but the U.S. District Court for the Southern District of California upheld this “unless and until” standard. Now, the company has appealed that decision to the Ninth Circuit, arguing that indefinite gag orders violate the First Amendment.

EFF is supportive of the unnamed company’s arguments. In fact, we have been waiting for our opportunity to make a nearly identical case on behalf of our own clients. For years, EFF has represented the service providers CREDO and Cloudflare in their own challenges to the NSL law. Along the way, we had the statute declared unconstitutional, only to have it amended by Congress and then upheld in 2017 by the Ninth Circuit. In response to our arguments that indefinite gag orders were unconstitutional, the appeals court wrote that lower courts reviewing these “nondisclosure requirements” are constitutionally “bound to ensure that the nondisclosure requirement does not remain in place longer than is necessary to serve the government's compelling interest.”

Because we disagreed with the Ninth Circuit’s ruling upholding the NSL statute as constitutional, we asked the court to reconsider. Our client’s petition has been pending for more than eighteen months, and the delay is why we haven’t asked a district court to follow the Ninth Circuit’s command to set an end date to the gag orders so that they don’t remain in place “longer than is necessary to serve the government’s compelling interest.”

But as our new amicus brief makes clear, our clients and the new unnamed company aren’t the only ones in this situation. Hundreds or even thousands of NSL recipients are subject to unconstitutional indefinite gag orders, depriving the public of a large swath of information about how the government uses NSLs. (EFF also has an ongoing Freedom of Information Act case to learn some of this information.)

In this case, the Ninth Circuit has the opportunity to fix one of the problems with NSLs and with its earlier ruling in our case. Hopefully the court will take the first step toward ending endless gag orders.

Related Cases:
National Security Letters (NSLs)
In re: National Security Letter 2011 (11-2173)
In re National Security Letter 2013 (13-80089)
In re National Security Letter 2013 (13-1165)
Barr v. Redacted - Under Seal NSL Challenge
