Electronic Frontier Foundation

"FREE from Chains!": Eskinder Nega is Released from Jail

EFF - Fri, 02/16/2018 - 8:13pm

Eskinder Nega, one of Ethiopia's most prominent online writers, recipient of the Golden Pen of Freedom in 2014, named a World Press Freedom Hero by the International Press Institute in 2017, and winner of PEN International's 2012 Freedom to Write Award, has finally been set free.

Eskinder is greeted by well-wishers on his release. Picture by Befekadu Hailu

Eskinder has been detained in Ethiopian jails since September 2011. He was accused and convicted of violating the country's Anti-Terrorism Proclamation, primarily by virtue of his warnings in online articles that if Ethiopia's government continued on its authoritarian path, it might face an Arab Spring-like revolt.

Ethiopia's leaders refused to listen to Eskinder's message. Instead, they decided the solution was to silence its messenger. Now, within the last few months, that refusal to engage with the challenges of democracy has led to the inevitable result. For two years, protests against the government have risen in frequency and size. Prime Minister Hailemariam Desalegn sought to reduce tensions by introducing reforms and releasing political prisoners like Eskinder. Despite thousands of prisoner releases, and the closure of one of the country's more notorious detention facilities, the protests continue. A day after Eskinder's release, Desalegn was forced to resign from his position. A day after that, the government declared a new state of emergency.

Even as they came face-to-face with the consequences of suppressing critics like Eskinder, the Ethiopian authorities pushed back against the truth. Eskinder's release was delayed for days after prison officials repeatedly demanded that he sign a confession falsely claiming he was a member of Ginbot 7, an opposition party banned as a terrorist organization within Ethiopia.

Eventually, following widespread international and domestic pressure, Eskinder was released without concession.

Eskinder, who spent nearly seven years in jail, joins a world whose politics and society have been transformed since his arrest. His predictions about the troubles Ethiopia would face if it silenced free expression may have come true, but his views were not perfect. He was, and will be again, an online writer, not a prophet. The promise of the Arab Spring that he identified has descended into its own authoritarian crackdowns. The technological tools he used to bypass Ethiopia's censorship and speak to a wider public are now just as often used by dictators to silence such voices. But that means we need more speakers like Eskinder, not fewer. And those speakers should be carefully listened to, not forced into imprisonment and exile.

New National Academy of Sciences Report on Encryption Asks the Wrong Questions

EFF - Fri, 02/16/2018 - 4:04pm

The National Academy of Sciences (NAS) released a much-anticipated report yesterday that attempts to influence the encryption debate by proposing a “framework for decisionmakers.” At best, the report is unhelpful. At worst, its framing makes the task of defending encryption harder.

The report collapses the question of whether the government should mandate “exceptional access” to the contents of encrypted communications into the question of how the government could accomplish such a mandate. We wish the report gave as much weight to the benefits of encryption, and to the risks that exceptional access poses to everyone’s civil liberties, as it does to the needs—real and professed—of law enforcement and the intelligence community.

From its outset two years ago, the NAS encryption study was not intended to reach any conclusions about the wisdom of exceptional access, but instead to “provide an authoritative analysis of options and trade-offs.” This would seem to be a fitting task for the National Academy of Sciences, which is a non-profit, non-governmental organization, chartered by Congress to provide “objective, science-based advice on critical issues affecting the nation.” The committee that authored the report included well-respected cryptographers and technologists, lawyers, members of law enforcement, and representatives from the tech industry. It also held two public meetings and solicited input from a range of outside stakeholders, EFF among them.

EFF’s Seth Schoen and Andrew Crocker presented at the committee’s meeting at Stanford University in January 2017. We described what we saw as “three truths” about the encryption debate: First, there is no substitute for “strong” encryption, i.e., encryption without any intentionally included method for any party (other than the intended recipient or device holder) to access plaintext, such as a mechanism for decryption on demand by the government. Second, an exceptional access mandate will help law enforcement and intelligence investigations in certain cases. Third, “strong” encryption cannot be successfully outlawed, given its proliferation, the fact that a large proportion of encryption systems are open source, and the fact that U.S. law has limited reach on the global stage. We wish the report had made a concerted attempt to grapple with that first truth, instead of confining its analysis to the second and third.

We recognize that the NAS report was undertaken in good faith, but the trouble with the final product is twofold.

First, its framing is hopelessly slanted. Not only does the report studiously avoid taking a position on whether compromising encryption is a good idea, its “options and tradeoffs” are all centered around the stated government need of “ensuring access to plaintext.” To that end, the report examines four possible options: (1) taking no legislative action, (2) providing additional support for government hacking and other workarounds, (3) a legislative mandate that providers provide government access to plaintext, and (4) mandating a particular technical method for providing access to plaintext.

But all of these options, including “no legislative action,” treat government agencies’ stated need for access to plaintext as the only goal worth studying, with everything else as a tradeoff. For example, from EFF’s perspective, the adoption of encryption by default is one of the most positive developments in technology policy in recent years, because it permits regular people to keep their data confidential from eavesdroppers, thieves, abusers, criminals, and repressive regimes around the world. By contrast, because of its framing, the report discusses these developments purely in terms of criminals “who may unknowingly benefit from default settings” and thereby evade law enforcement.

By approaching the question only as one of how to deliver plaintext to law enforcement, rather than approaching the debate more holistically, the NAS does us a disservice. The question of whether encryption should or shouldn’t be compromised for “exceptional access” should not be treated as one of several in the encryption debate: it is the question.

Second, although it attempts to recognize the downsides of exceptional access, the report’s discussion of the possible risks to civil liberties is notably brief. In the span of only three pages (out of nearly a hundred), it acknowledges the importance of encryption to supporting values such as privacy and free expression. And unlike the interests of law enforcement, which are represented in every section, the risks that exceptional access poses to civil liberties are confined to that single stand-alone discussion, treated as just one more tradeoff.

To emphasize the report’s focus, the civil liberties section ends with the observation that criminals and terrorists use encryption to “take actions that negatively impact the security of law-abiding individuals.” This ignores the possibility that encryption can both enhance civil liberties and preserve individual safety. That’s why, for example, experts on domestic violence argue that smartphone encryption protects victims from their abusers, and that law enforcement should not seek to compromise smartphone encryption in order to prosecute these crimes.

Furthermore, the simple act of mandating that providers break encryption in their products is itself a significant civil liberties concern, quite apart from the privacy and security implications that would result. Specifically, EFF raised concerns that encryption does not just support free expression, it is free expression. Notably absent from the report is any examination of the rights of developers of cryptographic software, particularly given the role played by free and open source software in the encryption ecosystem. The report also ignores the legal landscape in the United States, which strongly protects the principle that code (including encryption) is speech protected by the First Amendment.

The report also underplays the international implications of any U.S. government mandate for U.S.-based providers. Currently, companies resist demands for plaintext from regimes whose respect for the rule of law is dubious, but that will almost certainly change if they accede to similar demands from U.S. agencies. In a massive understatement, the report notes that this could have “global implications for human rights.” We wish that the NAS had given this crucial issue far more emphasis and delved more deeply into the question, for instance, of how Apple could plausibly say no to a Chinese demand to wiretap a Chinese user’s FaceTime conversations while providing that same capacity to the FBI.

In any tech policy debate, expert advice is valuable not only on how to implement a particular policy but also on whether to undertake that policy in the first place. The NAS might believe that as the provider of “objective, science-based advice,” it isn’t equipped to weigh in on this sort of question. We disagree.

EFF and MuckRock Are Filing a Thousand Public Records Requests About ALPR Data Sharing

EFF - Fri, 02/16/2018 - 1:28pm

EFF and MuckRock have launched a new public records campaign to reveal how much data law enforcement agencies have collected using automated license plate readers (ALPRs) and are sharing with each other.

Over the next few weeks, the two organizations are filing approximately 1,000 public records requests with agencies that have deals with Vigilant Solutions, one of the nation’s largest vendors of ALPR surveillance technology and software services. We’re seeking documentation showing who’s sharing ALPR data with whom. We are also requesting information on how many plates each agency scanned in 2016 and 2017 and how many of those plates were on predetermined “hot lists” of vehicles suspected of being connected to crimes.

You can see the full list of agencies and track the progress of each request through the Street-Level Surveillance: ALPR Campaign page on MuckRock.

As Easy As Adding a Friend on Facebook

“Joining the largest law enforcement LPR sharing network is as easy as adding a friend on your favorite social media platform.”

That’s a direct quote from Vigilant Solutions in its promotional materials for its ALPR technology. Through its LEARN system, Vigilant Solutions has made it possible for government agencies—particularly sheriff’s offices and police departments—to grant 24-7, unrestricted database access to hundreds of other agencies around the country.

ALPRs are camera systems that scan every license plate that passes in order to create enormous databases of where people drive and park their cars both historically and in real time. Collected en masse by ALPRs mounted on roadways and vehicles, this data can reveal sensitive information about people, such as where they work, socialize, worship, shop, sleep at night, and seek medical care or other services. ALPR allows your license plate to be used as a tracking beacon and a way to map your social networks.
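
To make that concern concrete, here is a minimal sketch in Python of the two ways this kind of data gets used: real-time hot-list alerts and retrospective location tracking. The schema and plate numbers are hypothetical, not Vigilant's actual system; the point is that every scan is retained, so the same database that flags a suspect vehicle can also reconstruct anyone's movements.

```python
# Hypothetical ALPR data: each scan ties a plate to a time and a place.
from collections import defaultdict
from datetime import datetime

scans = [
    ("7ABC123", datetime(2018, 2, 1, 8, 2), "clinic parking lot"),
    ("7ABC123", datetime(2018, 2, 1, 18, 40), "house of worship"),
    ("4XYZ999", datetime(2018, 2, 1, 9, 15), "main st & 5th"),
]

hot_list = {"4XYZ999"}  # plates flagged as connected to a crime

# Real-time alerting: flag any scan whose plate is on the hot list.
alerts = [s for s in scans if s[0] in hot_list]

# Retrospective tracking: everything else is retained too, so any plate's
# movements can be reconstructed after the fact.
history = defaultdict(list)
for plate, when, where in scans:
    history[plate].append((when, where))

print(alerts)
print(history["7ABC123"])  # a day in the life of one unsuspected driver
```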

Here’s the question: who is on your local police department’s and sheriff office’s ALPR friend lists?

Perhaps you live in a “sanctuary city.” There’s a very real chance local police are sharing ALPR data with Immigration & Customs Enforcement, Customs & Border Protection, or one of their subdivisions.

Perhaps you live thousands of miles from the South. You’d be surprised to learn that scores of small towns in rural Georgia have round-the-clock access to your ALPR data. This includes towns like Meigs, which serves a population of 1,000 and did not even have full-time police officers until last fall.

In 2017, EFF and the Center for Human Rights and Privacy filed records requests with several dozen law enforcement agencies in California. We found that police departments were routinely sharing ALPR data with a wide variety of agencies, in ways that may be difficult to justify. Police often shared with the DEA, FBI, and U.S. Marshals—but they also shared with federal agencies with a less clear interest, such as the U.S. Forest Service, the U.S. Department of Veterans Affairs, and the Air Force base at Fort Eustis. California agencies were also sharing with public universities on the East Coast, airports in Tennessee and Texas, and agencies that manage public assistance programs, like food stamps and indigent health care. In some cases, the records indicate the agencies were sharing with private actors.

Meanwhile, most agencies are connected to an additional network called the National Vehicle Locator System (NVLS), which shares sensitive information with more than 500 government agencies, the identities of which have never been publicly disclosed.

Here are the data sharing documents we obtained in 2017, which we are seeking to update with our new series of requests.

We hope to create a detailed snapshot of the ALPR mass surveillance network linking law enforcement and other government agencies nationwide. Currently, the only entity that has the definitive list is Vigilant Solutions, which, as a private company, is not subject to state or federal public record disclosure laws. So far, the company has not volunteered this information, despite reaping many millions in tax dollars.

Until it does, we’ll keep filing requests.

For more information on ALPRs, visit EFF’s Street-Level Surveillance hub.

Federal Judge Says Embedding a Tweet Can Be Copyright Infringement

EFF - Thu, 02/15/2018 - 9:12pm

Rejecting years of settled precedent, a federal court in New York has ruled [PDF] that you could infringe copyright simply by embedding a tweet in a web page. Even worse, the logic of the ruling applies to all in-line linking, not just embedding tweets. If adopted by other courts, this legally and technically misguided decision would threaten millions of ordinary Internet users with infringement liability.

This case began when Justin Goldman accused online publications, including Breitbart, Time, Yahoo, Vox Media, and the Boston Globe, of copyright infringement for publishing articles that linked to a photo of NFL star Tom Brady. Goldman took the photo, someone else tweeted it, and the news organizations embedded a link to the tweet in their coverage (the photo was newsworthy because it showed Brady in the Hamptons while the Celtics were trying to recruit Kevin Durant). Goldman said those stories infringe his copyright.

Courts have long held that copyright liability rests with the entity that hosts the infringing content—not someone who simply links to it. The linker generally has no idea that it’s infringing, and isn’t ultimately in control of what content the server will provide when a browser contacts it. This “server test,” originally from a 2007 Ninth Circuit case called Perfect 10 v. Amazon, provides a clear and easy-to-administer rule. It has been a foundation of the modern Internet.
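
To see why the server test maps onto how embedding actually works, consider what a publisher does to embed a tweet. The sketch below uses Twitter's public oEmbed endpoint (the tweet URL is the example from Twitter's own documentation). The publisher receives a snippet of HTML markup and pastes it into its page; the tweet's content, including any photo, is then served to each reader's browser directly by Twitter's servers, never by the publisher's.

```python
# Fetch the embed markup for a tweet from Twitter's oEmbed endpoint.
# The publisher's server only ever handles this markup, not the media itself.
import requests

resp = requests.get(
    "https://publish.twitter.com/oembed",
    params={"url": "https://twitter.com/Interior/status/507185938620219395"},
)
embed_html = resp.json()["html"]  # e.g. '<blockquote class="twitter-tweet">...'
print(embed_html)
```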

Judge Katherine Forrest rejected the Ninth Circuit’s server test, based in part on a surprising approach to the process of embedding. The opinion describes the simple process of embedding a tweet or image—something done every day by millions of ordinary Internet users—as if it were a highly technical process done by “coders.” That process, she concluded, put publishers, not servers, in the driver’s seat:

[W]hen defendants caused the embedded Tweets to appear on their websites, their actions violated plaintiff’s exclusive display right; the fact that the image was hosted on a server owned and operated by an unrelated third party (Twitter) does not shield them from this result.

She also argued that Perfect 10 (which concerned Google’s image search) could be distinguished because in that case the “user made an active choice to click on an image before it was displayed.” But that was not a detail that the Ninth Circuit relied on in reaching its decision. The Ninth Circuit’s rule—which looks at who actually stores and serves the images for display—is far more sensible.

If this ruling is appealed (there would likely need to be further proceedings in the district court first), the Second Circuit will be asked to consider whether to follow Perfect 10 or Judge Forrest’s new rule. We hope that today’s ruling does not stand; if it does, it will threaten the ubiquitous practice of in-line linking that benefits millions of Internet users every day.

Related Cases: Perfect 10 v. Google

The False Teeth of Chrome's Ad Filter

EFF - Thu, 02/15/2018 - 9:00pm

Today Google launched a new version of its Chrome browser with what it calls an "ad filter"—meaning that it sometimes blocks ads but is not an "ad blocker." EFF welcomes the elimination of the worst ad formats. But Google's approach here is a band-aid response to the crisis of trust in advertising, one that leaves massive user privacy issues unaddressed.

Last year, a new industry organization, the Coalition for Better Ads, published user research investigating ad formats responsible for "bad ad experiences." The Coalition examined 55 ad formats, of which 12 were deemed unacceptable. These included various full page takeovers (prestitial, postitial, rollover), autoplay videos with sound, pop-ups of all types, and ad density of more than 35% on mobile. Google is supposed to check sites for the forbidden formats and give offenders 30 days to reform or have all their ads blocked in Chrome. Censured sites can purge the offending ads and request reexamination. 

The Coalition for Better Ads Lacks a Consumer Voice

The Coalition involves giants such as Google, Facebook, and Microsoft, along with ad trade organizations, adtech companies, and large advertisers. Criteo, a retargeter with a history of contested user privacy practices, is also involved, as is content marketer Taboola. Consumer and digital rights groups are not represented in the Coalition.

This industry membership explains the limited horizon of the group, which ignores the non-format factors that annoy and drive users to install content blockers. While people are alienated by aggressive ad formats, the problem has other dimensions. Whether it’s the use of ads as a vector for malware, the consumption of mobile data plans by bloated ads, or the monitoring of user behavior through tracking technologies, users have a lot of reasons to take action and defend themselves.

But these elements are ignored. Privacy, in particular, figured neither in the tests commissioned by the Coalition, nor in their three published reports that form the basis for the new standards. This is no surprise given that participating companies include the four biggest tracking companies: Google, Facebook, Twitter, and AppNexus. 

Stopping the "Biggest Boycott in History"

Some commentators have interpreted ad blocking as the "biggest boycott in history" against the abusive and intrusive nature of online advertising. Now the Coalition aims to slow the adoption of blockers by enacting minimal reforms. PageFair, an adtech company that monitors adblocker use, estimates there are 600 million active users of blockers. Some see no ads at all, but most users of the two largest blockers, AdBlock and Adblock Plus, see ads "whitelisted" under the Acceptable Ads program. These companies leverage their position as gatekeepers to the user's eyeballs, obliging Google to buy back access to the "blocked" part of their user base through payments under Acceptable Ads. This is expensive (a German newspaper claims a figure as high as 25 million euros) and is viewed with disapproval by many advertisers and publishers.

Industry actors now understand that adblocking’s momentum is rooted in the industry’s own failures, and the Coalition is a belated response to this. While nominally an exercise in self-regulation, the enforcement of the standards through Chrome is a powerful stick. By eliminating the most obnoxious ads, they hope to slow the growth of independent blockers.

What Difference Will It Make?

Coverage of Chrome's new feature has focused on the impact on publishers, and on doubts about the Internet’s biggest advertising company enforcing ad standards through its dominant browser. Google has sought to mollify publishers by stating that only 1% of sites tested have been found non-compliant, and has heralded the changed behavior of major publishers like the LA Times and Forbes as evidence of success. But if so few sites fall below the Coalition's bar, the filter seems unlikely to be enough to dissuade users from installing a blocker. Eyeo, the company behind Adblock Plus, has a lot to lose should this strategy be successful. Eyeo argues that Chrome will only "filter" 17% of the 55 ad formats tested, whereas 94% are blocked by Adblock Plus.

User Protection or Monopoly Power?

The marginalization of egregious ad formats is positive, but should we be worried by this display of power by Google? In the past, browser companies such as Opera and Mozilla took the lead in combating nuisances such as pop-ups, which was widely applauded. Those browsers were not active in advertising themselves. The situation is different with Google, the dominant player in the ad and browser markets.

Google exploiting its browser dominance to shape the conditions of the advertising market raises some concerns. It is notable that the ads Google places on videos in YouTube ("instream pre-roll") were not user-tested and are exempted from the prohibition on "auto-play ads with sound." This risk of a conflict of interest distinguishes the Coalition for Better Ads from, for example, Chrome's monitoring of sites associated with malware and related user protection notifications.

There is also the risk that Google may change its position with regard to third-party extensions that give users more powerful options. Recent history justifies such concern: Disconnect and AdNauseam have been excluded from the Chrome Store for alleged violations of the Store’s rules. (Ironically, Adblock Plus has never experienced this problem.)

Chrome Falls Behind on User Privacy 

This move from Google will reduce the frequency with which users run into the most annoying ads. Regardless, it fails to address the larger problem of tracking and privacy violations. Indeed, many of the Coalition’s members were active opponents of Do Not Track at the W3C, which would have offered privacy-conscious users an easy opt-out. The resulting impression is that the ad filter is really about the industry trying to solve its adblocking problem, not about addressing users' concerns.

Chrome and Microsoft Edge are now the only major browsers that do not offer integrated tracking protection. Firefox introduced this feature last November in Quantum, enabled by default in "Private Browsing" mode with the option to enable it universally. Meanwhile, Apple's Safari browser has Intelligent Tracking Prevention, Opera ships with an ad/tracker blocker for users to activate, and Brave has user privacy at the center of its design. It is a shame that Chrome's user security and safety team, widely admired in the industry, is empowered only to offer protection against outside attackers, but not against commercial surveillance conducted by Google itself and other advertisers. If you are using Chrome (1), you need EFF's Privacy Badger or uBlock Origin to fill this gap.

(1) This article does not address other problematic aspects of Google services. When users sign into Gmail, for example, their activity across other Google products is logged. Worse yet, when users are signed into Chrome their full browser history is stored by Google and may be used for ad targeting. This account data can also be linked to Doubleclick's cookies. The storage of browser history is part of Sync (enabling users access to their data across devices), which can also be disabled. If users desire to use Sync but exclude the data from use for ad targeting by Google, this can be selected under ‘Web And App Activity’ in Activity controls. There is an additional opt-out from Ad Personalization in Privacy Settings.

Customs and Border Protection's Biometric Data Snooping Goes Too Far

EFF - Thu, 02/15/2018 - 8:21pm

The U.S. Department of Homeland Security (DHS), Customs and Border Protection (CBP) Privacy Office, and Office of Field Operations recently invited privacy stakeholders—including EFF and the ACLU of Northern California—to participate in a briefing and update on how the CBP is implementing its Biometric Entry/Exit Program.

As we’ve written before, biometrics systems are designed to identify or verify the identity of people using their intrinsic physical or behavioral characteristics. Because biometric identifiers are by definition unique to an individual person, government collection and storage of this data poses unique threats to the privacy and security of individual travelers.

EFF has many concerns about the government collecting and using biometric identifiers, and specifically, we object to the expansion of several DHS programs subjecting Americans and foreign citizens to facial recognition screening at international airports. EFF appreciated the opportunity to share these concerns directly with CBP officers and we hope to work with CBP to allow travelers to opt-out of the program entirely.

You can read the full letter we sent to CBP here.

Law Enforcement Use of Face Recognition Systems Threatens Civil Liberties, Disproportionately Affects People of Color: EFF Report

EFF - Thu, 02/15/2018 - 10:45am
Independent Oversight, Privacy Protections Are Needed

San Francisco, California—Face recognition—fast becoming law enforcement’s surveillance tool of choice—is being implemented with little oversight or privacy protections, leading to faulty systems that will disproportionately impact people of color and may implicate innocent people for crimes they didn’t commit, says an Electronic Frontier Foundation (EFF) report released today.

Face recognition is rapidly creeping into modern life, and face recognition systems will one day be capable of capturing the faces of people, often without their knowledge, as they walk down the street, enter stores, stand in line at the airport, attend sporting events, drive their cars, and utilize public spaces. Researchers at Georgetown Law estimated that one in every two American adults—117 million people—is already in law enforcement face recognition systems.

This kind of surveillance will have a chilling effect on Americans’ willingness to exercise their rights to speak out and be politically engaged, the report says. Law enforcement has already used face recognition at political protests, and may soon use face recognition with body-worn cameras, to identify people in the dark, and to project what someone might look like from a police sketch or even a small sample of DNA.

Face recognition employs computer algorithms to pick out details about a person’s face from a photo or video to form a template. As the report explains, police use face recognition to identify unknown suspects by comparing their photos to images stored in databases and to scan public spaces to try to find specific pre-identified targets.
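
As an illustration of what template comparison means in practice, here is a minimal sketch using the open-source face_recognition library as a stand-in for the proprietary systems police actually deploy; the image file names are hypothetical. The key detail is the tolerance threshold: a "match" is just a distance comparison, and loosening the threshold produces more candidate matches and more false positives.

```python
# A sketch of template-based face matching with the open-source
# face_recognition library (not any actual law enforcement system).
import face_recognition

probe = face_recognition.load_image_file("unknown_suspect.jpg")    # hypothetical file
mugshot = face_recognition.load_image_file("database_entry.jpg")   # hypothetical file

# The "templates" here are 128-dimensional encodings derived from each face.
probe_enc = face_recognition.face_encodings(probe)[0]
mugshot_enc = face_recognition.face_encodings(mugshot)[0]

# Matching is a distance comparison against a tolerance threshold. A looser
# tolerance returns more candidate "matches" and more false positives.
distance = face_recognition.face_distance([mugshot_enc], probe_enc)[0]
is_match = face_recognition.compare_faces([mugshot_enc], probe_enc, tolerance=0.6)[0]
print(distance, is_match)
```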

But no face recognition system is 100 percent accurate, and false positives—when a person’s face is incorrectly matched to a template image—are common. Research shows that face recognition misidentifies African Americans and ethnic minorities, young people, and women at higher rates than whites, older people, and men, respectively. And because of well-documented racially-biased police practices, all criminal databases—including mugshot databases—include a disproportionate number of African-Americans, Latinos, and immigrants.

For both of these reasons, inaccuracies in face recognition systems will disproportionately affect people of color.

“The FBI, which has access to at least 400 million images and is the central source for facial recognition identification for federal, state, and local law enforcement agencies, has failed to address the problem of false positives and inaccurate results,” said EFF Senior Staff Attorney Jennifer Lynch, author of the report. “It has conducted few tests to ensure accuracy and has done nothing to ensure its external partners—federal and state agencies—are not using face recognition in ways that allow innocent people to be identified as criminal suspects.”

Lawmakers, regulators, and policy makers should take steps now to limit face recognition collection and subject it to independent oversight, the report says. Legislation is needed to place meaningful checks on government use of face recognition, including rules limiting retention and sharing, requiring notification when face prints are collected, ensuring robust security procedures to prevent data breaches, and establishing legal processes governing when law enforcement may collect face images from the public without their knowledge, the report concludes.

“People should not have to worry that they may be falsely accused of a crime because an algorithm mistakenly matched their photo to a suspect. They shouldn’t have to worry that their data will end up in the hands of identity thieves because face recognition databases were breached. They shouldn’t have to fear that their every move will be tracked if face recognition is linked to the networks of surveillance cameras that blanket many cities,” said Lynch. “Without meaningful legal protections, this is where we may be headed.”

For the report:
https://www.eff.org/wp/law-enforcement-use-face-recognition

For more on face recognition:
https://www.eff.org/document/facial-recognition-one-pager

Contact: Jennifer Lynch, Senior Staff Attorney, jlynch@eff.org

Court Dismisses Playboy's Lawsuit Against Boing Boing (For Now)

EFF - Wed, 02/14/2018 - 7:48pm

In a win for free expression, a court has dismissed a copyright lawsuit against Happy Mutants, LLC, the company behind acclaimed website Boing Boing. The court ruled [PDF] that Playboy’s complaint—which accused Boing Boing of copyright infringement for linking to a collection of centerfolds—had not sufficiently established its copyright claim. Although the decision allows Playboy to try again with a new complaint, it is still a good result for supporters of online journalism and sensible copyright.

Playboy Entertainment’s lawsuit accused Boing Boing of copyright infringement for reporting on a historical collection of Playboy centerfolds and linking to a third-party site. In a February 2016 post, Boing Boing told its readers that someone had uploaded scans of the photos, noting they were “an amazing collection” reflecting changing standards of what is considered sexy. The post contained links to an imgur.com page and YouTube video—neither of which were created by Boing Boing.

EFF, together with co-counsel Durie Tangri, filed a motion to dismiss [PDF] on behalf of Boing Boing. We explained that Boing Boing did not contribute to the infringement of any Playboy copyrights by including a link to illustrate its commentary. The motion noted that another judge in the same district had recently dismissed a case in which Quentin Tarantino accused Gawker of copyright infringement for linking to a leaked script in its reporting.

Judge Fernando M. Olguin’s ruling quotes the Tarantino decision, noting that:

An allegation that a defendant merely provided the means to accomplish an infringing activity is insufficient to establish a claim for copyright infringement. Rather, liability exists if the defendant engages in personal conduct that encourages or assists the infringement.

Given this standard, the court was “skeptical that plaintiff has sufficiently alleged facts to support either its inducement or material contribution theories of copyright infringement.”

From the outset of this lawsuit, we have been puzzled as to why Playboy, once a staunch defender of the First Amendment, would attack a small news and commentary website. Today’s decision leaves Playboy with a choice: it can try again with a new complaint or it can leave this lawsuit behind. We don’t believe there’s anything Playboy could add to its complaint that would meet the legal standard. We hope that it will choose not to continue with its misguided suit.

Related Cases: Playboy Entertainment Group v. Happy Mutants

Will Canada Be the New Testing Ground for SOPA-lite? Canadian Media Companies Hope So

EFF - Wed, 02/14/2018 - 1:33pm

A consortium of media and distribution companies calling itself “FairPlay Canada” is lobbying for Canada to implement a fast-track, extrajudicial website blocking regime in the name of preventing unlawful downloads of copyrighted works. The proposal is currently being considered by the Canadian Radio-television and Telecommunications Commission (CRTC), an agency roughly analogous to the Federal Communications Commission (FCC) in the U.S.

The proposal is misguided and flawed. We’re still analyzing it, but below are some preliminary thoughts.

The Proposal

The consortium is requesting that the CRTC establish a part-time, non-profit organization that would receive complaints from rightsholders alleging that a website is “blatantly, overwhelmingly, or structurally engaged” in violations of Canadian copyright law. If a site were determined to be infringing, Canadian ISPs would be required to block access to it. The proposal does not specify how this would be accomplished.

The consortium proposes some safeguards in an attempt to show that the process would be meaningful and fair. It proposes that affected websites, ISPs, and members of the public be allowed to respond to any blocking request. It also suggests that no blocking request would be implemented unless a recommendation to block were adopted by the CRTC, and that any affected party would have the right to appeal to a court.

FairPlay argues the system is necessary because, it claims, unlawful downloads are destroying the Canadian creative industry and harming Canadian culture.

(Some of) The Problems

As Michael Geist, the Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, points out, Canada saw more investment in film and TV production last year than at any other time in history. And it’s not just investment in creative industries that is growing: legal means of accessing creative content are also growing, as Bell itself recognized in a statement to financial analysts. Contrary to the argument pushed by the content industry and other FairPlay backers, investment and lawful film and TV services are growing, not shrinking. The Canadian film and TV industries don’t need website-blocking.

The proposal would require service providers to “disappear” certain websites, endangering Internet security and sending a troubling message to the world: it’s okay to interfere with the Internet, even effectively blacklisting entire domains, as long as you do it in the name of IP enforcement. Of course, blacklisting entire domains can mean turning off thousands of underlying websites that may have done nothing wrong. The proposal doesn’t explain how blocking is to be accomplished, but when such plans have been raised in other contexts, we’ve noted the significant concerns we have about various technological ways of “blocking” that wreak havoc on how the Internet works.

And we’ve seen how harmful mistakes can be. For example, back in 2011, the U.S. government seized the domain names of two popular websites based on unsubstantiated allegations of copyright infringement. The government held those domains for over 18 months. As another example, one company named a whopping 3,343 websites in a lawsuit as infringing on trademark and copyright rights. Without opposition, the company was able to get an order that required domain name registrars to seize these domains. Only after many defendants had their legitimate websites seized did the court realize that statements made by the rightsholder about many of the websites were inaccurate. Although the proposed system would involve blocking (however that is accomplished) rather than seizing domains, the problem is clear: mistakes are made, and they can have long-lasting effects.

But beyond blocking for copyright infringement, we’ve also seen that once a system is in place to take down one type of content, it will only lead to calls for more blocking, including that of lawful speech. This raises significant freedom of expression and censorship concerns.

We’re also concerned about what’s known as “regulatory capture” with this type of system: the tendency of a regulator to align its interests with those of the regulated. Here, the system would be initially funded by rightsholders, would be staffed “part-time” by those with “relevant experience,” and would get work only when rightsholders view it as a valuable system. These sorts of structural aspects of the proposal have a tendency to cause regulatory capture. An impartial judiciary that sees cases and parties from across a political, social, and cultural spectrum helps avoid this pitfall.

Finally, we’re also not sure why this proposal is needed at all. Canada already has some of the strongest anti-piracy laws in the world. The proposal just adds complexity and strips away some of the protections that a court affords those who may be involved in legitimate business (even if the content owners don’t like those businesses).

These are just some of the concerns raised by this proposal. Professor Geist’s blog highlights more, and in more depth.

What you can do

The CRTC is now accepting public comment on the proposal, and has already received over 4,000 comments. The deadline is March 1, although an extension has been sought. We encourage any interested members of the public to submit comments to let the Commission know your thoughts. Please note that all comments are made public, and require certain personal information to be included.

Let's Encrypt Hits 50 Million Active Certificates and Counting

EFF - Wed, 02/14/2018 - 1:02pm

In yet another milestone on the path to encrypting the web, Let’s Encrypt has now issued over 50 million active certificates. Depending on your definition of “website,” this suggests that Let’s Encrypt is protecting between about 26 million and 66 million websites with HTTPS (more on that below). Whatever the number, it’s growing every day as more and more webmasters and hosting providers use Let’s Encrypt to provide HTTPS on their websites by default.

Source: https://letsencrypt.org/stats/ as of February 14, 2018

Let’s Encrypt is a certificate authority, or CA. CAs like Let’s Encrypt are crucial to secure, HTTPS-encrypted browsing. They issue and maintain digital certificates that help web users and their browsers know they’re actually talking to the site they intended to.

One of the things that sets Let’s Encrypt apart is that it issues these certificates for free. And, with the help of EFF’s Certbot client and a range of other automation tools, it’s easy for webmasters of varying skill and resource levels to get a certificate and implement HTTPS. In fact, HTTPS encryption has become an automatic part of many hosting providers’ offerings.

50 million active certificates represents the number of certificates that are currently valid and have not expired. (Sometimes we also talk about “total issuance,” which refers to the total number of certificates ever issued by Let’s Encrypt. That number is around 217 million now.) Relating these numbers to names of “websites” is a bit complicated. Some certificates, such as those issued by certain hosting providers, cover many different sites. Yet some certificates are also redundant with others, so there may be a handful of active certificates all covering precisely the same names.

One way to count is by “fully qualified domains active”—in other words, different names covered by non-expired certificates. This is now at 66 million. This metric can overcount sites; while most people would say that eff.org and www.eff.org are the same website, they count as two different names here.

Another way to count the number of websites that Let’s Encrypt protects is by looking at “registered domains active,” of which Let’s Encrypt currently has about 26 million. This refers to the number of different registered domain names among non-expired certificates. In this case, supporters.eff.org and www.eff.org would be counted as one name. In cases where pages under the same registered domain are run by different people with different content, this metric may undercount different sites.
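
For the curious, here is a toy sketch of the difference between the two counting methods, using hypothetical certificate names. A real implementation would consult the Public Suffix List (for example, via the tldextract package) rather than the naive "last two labels" rule below, which fails for names like example.co.uk.

```python
# Hypothetical names covered by active certificates.
names = ["eff.org", "www.eff.org", "supporters.eff.org", "letsencrypt.org"]

def registered_domain(fqdn):
    # Naive heuristic: keep the last two labels. Real code should use the
    # Public Suffix List instead.
    return ".".join(fqdn.split(".")[-2:])

fqdns_active = len(set(names))                                  # counts 4 names
registered_active = len({registered_domain(n) for n in names})  # counts 2 domains
print(fqdns_active, registered_active)
```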

No matter how you slice it, Let’s Encrypt is one of the largest CAs. And it has grown largely by giving websites their first-ever certificate rather than by grabbing websites from other CAs. That means that, as Let’s Encrypt grows, the number of HTTPS-protected websites on the web tends to grow too. Every website protected is one step closer to encrypting the entire web, and milestones like this remind us that we are on our way to achieving that goal together.

The Revolution and Slack

EFF - Wed, 02/14/2018 - 12:44pm

The revolution will not be televised, but it may be hosted on Slack. Community groups, activists, and workers in the United States are increasingly gravitating toward the popular collaboration tool to communicate and coordinate efforts. But many of the people using Slack for political organizing and activism are not fully aware of the ways Slack falls short in serving their security needs. Slack has yet to support this community in its default settings or in its ongoing design.  

We urge Slack to recognize the community organizers and activists using its platform and take more steps to protect them. In the meantime, this post provides context and things to consider when choosing a platform for political organizing, as well as some tips about how to set Slack up to best protect your community.

The Mismatch

Slack is designed as an enterprise system built for business settings. That results in a sometimes dangerous mismatch between the needs of the audience the company is aimed at serving and the needs of the important, often targeted community groups and activists who are also using it.

Two things that EFF tends to recommend for digital organizing are 1) using encryption as extensively as possible, and 2) self-hosting, so that a governmental authority has to get a warrant for your premises in order to access your information. The central thing to understand about Slack (and many other online services) is that it fulfills neither of these things. This means that if you use Slack as a central organizing tool, Slack stores and is able to read all of your communications, as well as identifying information for everyone in your workspace.

We know that for many, especially small organizations, self-hosting is not a viable option, and using strong encryption consistently is hard. Meanwhile, Slack is easy, convenient, and useful. Organizations have to balance their own risks and benefits. Regardless of your situation, it is important to understand the risks of organizing on Slack.

First, The Good News

Slack follows several best practices in standing up for users. Slack does require a warrant for content stored on its servers. Further, it promises not to voluntarily provide information to governments for surveillance purposes. Slack also promises to require the FBI to go to court to enforce gag orders issued with National Security Letters, a troubling form of subpoena. Additionally, federal law prohibits Slack from handing over content (but not metadata like membership lists) in response to civil subpoenas.

Slack also stores your data in encrypted form, which means that if it leaks or is stolen, it is not readable. This is excellent protection if you are worried about attacks and data breaches. It is not useful, however, if you are worried about governments or other entities putting pressure on Slack to hand over your information.
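
The distinction is worth spelling out. Below is a toy illustration using the Python cryptography package; it is not Slack's actual implementation, just a demonstration that encryption only protects data from parties who lack the key. With encryption at rest, the provider holds the key; with end-to-end encryption, only the communicating users would.

```python
# Toy illustration of encryption at rest: the provider holds the key.
from cryptography.fernet import Fernet

provider_key = Fernet.generate_key()  # kept on the provider's servers
box = Fernet(provider_key)

stored = box.encrypt(b"meet at the courthouse steps, 9am")
# Useless to a thief who steals only the ciphertext...
# ...but trivially readable by anyone holding the provider's key, including
# the provider itself when compelled by legal process:
print(box.decrypt(stored))

# End-to-end encryption would instead keep the key with workspace members,
# so the provider would have nothing legible to hand over.
```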

Risks With Slack In Particular

And now the downsides. These are things that Slack could change, and EFF has called on them to do so.

Slack can turn over content to law enforcement in response to a warrant. Slack’s servers store everything you do on its platform. Since Slack can read this information on its servers—that is, since it’s not end-to-end encrypted—Slack can be forced to hand it over in response to law enforcement requests. Slack does require warrants to turn over content, and can resist warrants it considers improper or overbroad. But if Slack complies with a warrant, users’ communications are readable on Slack’s servers and available for it to turn over to law enforcement.

Slack may fail to notify users of government information requests. When the government comes knocking on a website’s door for user data, that website should, at a minimum, provide users with timely, detailed notice of the request. Slack’s policy in this regard is lacking. Although it states that it will provide advance notice to users of government demands, it allows for a broad set of exceptions to that standard. This is something that Slack could and should fix, but it refuses to even explain why it has included these loopholes.

Slack content can make its way into your email inbox. Signing up for a Slack workspace also signs you up, by default, for email notifications when you are directly mentioned or receive a direct message. These email notifications can include the content of those mentions and messages. If you expect sensitive messages to stay in the Slack workspace where they were written and shared, this might be an unpleasant surprise. With these defaults in place, you have to trust not only Slack but also your email provider with your own and others’ private content.

Risks With Third-Party Platforms in General

Many of the risks that come with using Slack are also risks that come with using just about any third-party online platform. Most of these are problems with the law that we all must work on to fix together. Nevertheless, organizers must consider these risks when deciding whether Slack or any other online third-party platform is right for them.

Much of your sensitive information is not subject to a warrant requirement. While a warrant is required for content, some of the most sensitive information held by third-party platforms—including the identities and locations of the people in a Slack workspace—is considered “non-content” and is not currently protected by the warrant requirement federally or in most states. If the identities of your organization’s members are sensitive, consider whether Slack or any other online third party is right for you.

Companies can be legally prevented from giving users notice. While Slack and many other platforms have promised to require the FBI to justify controversial National Security Letter gags, these gags may still be enforced in many cases. In addition, many warrants and other forms of legal process come with gag orders issued by a court, leaving companies with no ability to notify you that the government has seized your data.

Slack workspaces are subject to civil discovery. Government is not the only entity that could seek information from Slack or other third parties. Private companies and other litigants have sought, and obtained, information from hosts ranging from Google to Microsoft to Facebook and Twitter. While federal law prevents them from handing over customer content in civil discovery, it does not protect “non-content” records, such as membership identities and locations.

A group is only as trustworthy as its members. Any group environment is only as trustworthy as the people who participate in it. Group members can share and even screenshot content, so it is important to establish guidelines and expectations that all members agree on. Establishing trusted admins or moderators to facilitate these agreements can also be beneficial.

Making Slack as Secure as Possible

If using Slack is still right for you, you can take steps to harden your security settings and make your closed workspaces as private as possible.

The lowest-hanging privacy fruit is to change a workspace’s retention settings. By default, Slack retains all the messages in a workspace or channel (including direct messages) for as long as the workspace exists. The same goes for any files submitted to the workspace. Workspace admins have the ability to set shorter retention periods, which can mean less content available for government requests or legal inquiries.

Users can also address the email-leaking concern described above by minimizing email notification settings. This works best if all of the members of a group agree to do it, since email notifications can expose multiple users’ messages. 

The privacy of a Slack workspace also relies on the security of individual members’ accounts. Setting up two-factor authentication can add an extra layer of security to an account, and admins even have the option of making two-factor authentication mandatory for all the members of a workspace.

However, no settings tweak can completely mitigate the concerns described above. We strongly urge Slack to step up to protect the high-risk groups that are using it along with its enterprise customers.  And all of us must stand together to push changes to the law.

Technology should stand with those who wish to make change in our world. Slack has made a great tool that can help, and it’s time for Slack to step up with its policies.

Companies Must Be Accountable to All Users: The Story of Egyptian Activist Wael Abbas

EFF - Tue, 02/13/2018 - 3:44pm

Egyptian journalist Wael Abbas holds a special distinction: over the years, he’s experienced censorship at the hands of four of Silicon Valley’s top companies. Although more extreme, his story isn’t so different from that of the many individuals who, after a single misstep of their own or a mistake by a content moderator, find themselves unceremoniously removed from a social platform.

When YouTube was still fairly new, Abbas began posting videos depicting police brutality in his native Egypt to the platform. The award-winning journalist and anti-torture activist found utility in the global platform, which even then had massive reach. One of the videos he had posted even resulted in a rare conviction of police officers in Cairo. But in late 2007, he found that his account had been removed without warning. The reason? His content, often graphic in nature, had been receiving large numbers of complaints.

Rights activists rallied around Abbas and were able to convince YouTube to restore his account; his archive of videos was eventually restored as well. YouTube later adjusted its rules to be more permissive of violent content that is documentarian in nature. Around the same time, Abbas’ Yahoo! email account was shut down—and later restored—over accusations that he was spamming other users.

More recently, Abbas has faced off with Facebook over an erroneous content decision made by the company. In November 2017, Abbas was issued a 30-day suspension by Facebook for a post in which he named an individual and accused him of running a scam and threatening other people. As a result of the suspension, Abbas was unable to post to Facebook or use Messenger or other platform tools. After we contacted the company, the suspension was reversed and Abbas’s access restored.

In another, more recent instance, Abbas had an image removed from Facebook, and received only a vague notification stating:

You uploaded a photo that violates our Terms of Use, and this photo has been removed. Facebook does not allow photos that attack an individual or group, or that contain nudity, drug use, violence, or other violations of the Terms of Use. These policies are designed to ensure Facebook remains a safe, secure and trusted environment for all users, including the many children who use the site.

Although Facebook pointed to its policies, it did not tell Abbas which of his photos had actually violated the Terms of Use, leaving him guessing as to what he’d done wrong. A Facebook spokesperson commented:

In most instances involving content removals, we send people a generic message to let them know that they've violated our Community Standards. We're in the process of trying to be more specific with our language so that people have a better understanding of why we've taken down their content and how can they avoid similar removals in the future.

Wael Abbas writes about his Twitter account being suspended

Abbas was able to hold on to his Facebook account, but with Twitter, he wasn’t so lucky. In December, he was suddenly suspended from the platform without warning or notification. His account, which was verified and had 350,000 followers, was described by Egyptian human rights activist Sherif Azer as “a live archive to the events of the revolution and till today one of few accounts still documenting human rights abuses in Egypt.” EFF contacted Twitter about the suspension, but the company did not respond to our query.

Platforms must be accountable to their users

Social media companies took great pride in the role they were said to have played in the 2011 Arab uprisings. But as a recent article from Middle East Eye points out, Egyptians are facing a significant increase in content takedowns on Facebook. The article asks the question: “Would those social media accounts which supported Egypt's uprisings in 2011 now be shut down?”

In fact, the most famous of those social media accounts—the page entitled “We Are All Khaled Said” that first called for protests on January 25, 2011—was actually shut down by Facebook in 2010, just a few months before the uprising. The page, which was later revealed to have been created by Google executive Wael Ghonim, was removed because Ghonim had been using a fake name, and only restored after US-based NGOs stepped in to help.

Similarly, Abbas was only able to have his Facebook suspension overturned after contacting EFF. Amina Ismail, a verified Egyptian Reuters journalist, was able to get a Twitter suspension overturned through her contacts. Abbas and Ismail are both high-profile journalists, however—most users don’t have access to contacts at Silicon Valley’s top tech companies.

Wael Abbas's experience demonstrates the precarity of our online lives, and the dire need for platforms to institute transparent practices. As we recently wrote, social media platforms must notify users clearly when they violate a policy, and offer a clear path of recourse so that all users have an opportunity to appeal content decisions. Abbas's experience is the tip of the iceberg: for every prominent journalist documenting injustice who manages to get through their filters, how many more have lost the fight against the censors before they had a chance to reach a wider public?

It is vital that technology companies recognize the role they play in fostering free expression and act accordingly. To learn more about our efforts to hold companies accountable on freedom of expression, visit Onlinecensorship.org.

We Don’t Need New Laws for Faked Videos, We Already Have Them

EFF - Tue, 02/13/2018 - 1:39pm

Video editing technology hit a milestone this month: the new tech is being used to make porn. With easy-to-use software, pretty much anyone can seamlessly take the face of one real person (like a celebrity) and splice it onto the body of another (like a porn star), creating videos that none of the people depicted consented to.

People have already picked up the technology, creating and uploading dozens of videos on the Internet that purport to involve famous Hollywood actresses in pornography films that they had no part in whatsoever.

While many specific uses of the technology (like specific uses of any technology) may be illegal or create liability, there is nothing inherently illegal about the technology itself. And existing legal restrictions should be enough to set right any injuries caused by malicious uses.

As Samantha Cole at Motherboard reported in December, a Reddit user named “deepfakes” began posting videos he created that replaced the faces of porn actors with other well-known (non-pornography) actors. According to Cole, the videos were “created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.”
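For readers wondering what that open-source code boils down to: the reported tools used a shared-encoder, two-decoder autoencoder design. The Python/PyTorch sketch below is our own illustrative reconstruction of that general idea, not code from any actual deepfake project, and every class name, layer size, and training detail in it is an assumption. One encoder learns features common to both faces, a separate decoder per person learns to reconstruct that person, and the "swap" is simply decoding person A's face with person B's decoder.

# Minimal sketch of the shared-encoder / per-person-decoder idea behind
# face-swapping tools. All names, sizes, and training details here are
# illustrative assumptions, not any specific project's code.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a latent vector shared by both people."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 64x64 face from the latent vector; one per person."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (sketch): each decoder learns to reconstruct its own person's
# faces from the *shared* encoding. Random tensors stand in for face crops.
faces_a = torch.rand(8, 3, 64, 64)
loss_a = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode person A's face, then decode with person B's decoder.
swapped = decoder_b(encoder(faces_a))

Nothing in the sketch is exotic; it is standard, off-the-shelf deep learning machinery, which helps explain how quickly the technique spread once it was demonstrated.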

Just over a month later, Cole reported that the creation of face-swapped porn, labeled “deepfakes” after the original Redditor, had “exploded,” with increasingly convincing results. An increasingly easy-to-use app had also launched, aimed at letting those without technical skills create convincing deepfakes. Soon, a marketplace for buying and selling deepfakes appeared in a subreddit, before being taken off the site. Other platforms, including Twitter, PornHub, Discord, and Gfycat, followed suit. In removing the material, each platform cited a concern that the people depicted in the deepfakes had not consented to their involvement in the videos.

We can quickly imagine many terrible uses for this face-swapping technology, both in creating nonconsensual pornography and false accounts of events, and in undermining the trust we currently place in video as a record of events.

But there can be beneficial and benign uses as well: political commentary, parody, anonymization of those needing identity protection, and even consensual vanity or novelty pornography. (A few others are hypothesized towards the end of this article.)

The knee-jerk reaction many people have toward any new technology that could be used for awful purposes is to try to criminalize or regulate the technology itself. But such a move would threaten the beneficial uses as well, and it would raise unnecessary constitutional problems.

Fortunately, existing laws should be able to provide acceptable remedies for anyone harmed by deepfake videos. This area isn’t entirely new to our legal framework: the US legal system has been dealing with the harms caused by photo manipulation and false information for a long time, and the principles developed there should apply equally to deepfakes.

What Laws Apply

If a deepfake is used for criminal purposes, then criminal laws will apply. For example, if a deepfake is used to pressure someone to pay money to have it suppressed or destroyed, extortion laws would apply. And for any situations in which deepfakes were used to harass, harassment laws apply. There is no need to make new, specific laws about deepfakes in either of these situations.

On the tort side, the best fit is probably the tort of False Light invasion of privacy. False light claims commonly address photo manipulation, embellishment, and distortion, as well as deceptive uses of non-manipulated photos for illustrative purposes. Deepfakes fit into those areas quite easily.

To win a false light lawsuit, a plaintiff (the person harmed by the deepfake, for example) must typically prove that the defendant (the person who uploaded the deepfake, for example) published something that creates a false or misleading impression of the plaintiff that damages the plaintiff’s reputation or causes them great offense, that the publication would be highly offensive to a reasonable person, and that it caused the plaintiff mental anguish or suffering. In many situations, placing someone in a deepfake without their consent would seem to be exactly the kind of “highly offensive” conduct the false light tort covers.

The Supreme Court further requires that in cases pertaining to matters of public interest, the plaintiff must also prove an intent that the audience believe the impression to be true. This is the actual malice requirement found in defamation law. 

False light is recognized as a legal action in about two-thirds of the states. It can be difficult to distinguish false light from defamation, and many courts treat them identically. The courts that treat them differently focus on the injury: defamation compensates for damage to reputation, while false light compensates for the offense and distress of being portrayed falsely. But of course, a plaintiff could sue for defamation if a deepfake has a natural tendency to damage their reputation.

The tort of Intentional Infliction of Emotional Distress (IIED) will also be available in many situations. A plaintiff can win an IIED lawsuit by proving that the defendant (again, for example, a deepfake creator and uploader) intended to cause the plaintiff severe emotional distress through extreme and outrageous conduct, and that the plaintiff actually suffered severe emotional distress as a result. The Supreme Court has found that where the extreme and outrageous conduct is the publication of a false statement about a public figure, the plaintiff must also prove an intent that the audience believe the statement to be true, an analog to defamation law’s actual malice requirement. The Court has further extended that requirement to all statements pertaining to matters of public interest.

And to the extent deepfakes are sold, or their creators receive some other benefit from them, they may also give rise to right of publicity claims by those whose images are used without their consent.

Lastly, someone whose copyrighted material (either the facial image or the source material into which the facial image is embedded) is used may have a claim for copyright infringement, subject of course to fair use and other defenses.

Yes, deepfakes can present a social problem about consent and trust in video, but EFF sees no reason why the already available legal remedies will not cover injuries caused by deepfakes.

How Have Europe's Upload Filtering and Link Tax Plans Changed?

EFF - Tue, 02/13/2018 - 12:26pm

Although we have been opposing Europe's misguided link tax and upload filtering proposals ever since they first surfaced in 2016, the proposals haven't been standing still during all that time. In the back and forth between a multiplicity of different Committees of the European Parliament, and two other institutions of the European Union (the European Commission and the Council of the European Union), various amendments have been offered up in an attempt at political compromise. Unfortunately, the point at which these compromises seem to have landed still poses the same problems as before.

What Has Happened with the Link Tax?

Article 11 is its official designation, but "link tax" is a far better informal description of this proposal, which would impose a requirement for Internet platforms to pay money to news publishers for providing links to news articles, accompanied by a short summary of what they are linking to. This isn't a copyright, because the link tax is paid to the publisher rather than the author, and because it is payable even if the portion of the news article taken isn't copyright-protected, falls within a copyright exception, or is freely licensed.

It's unclear why this proposal wasn't abandoned long ago. A similar link tax in Spain resulted in the closure of the Spanish version of Google News, a German equivalent has been deemed a dismal failure, and both small publishers and even a European Commission-funded study have slammed the proposal. Nevertheless, as of February 2018, it remains firmly on the table, with virtually nothing to sweeten the thoroughly rotten deal it offers Internet platforms and publishers alike.

The most recent attempt at compromise comes in a discussion paper [PDF] from the Bulgarian Council Presidency, prepared as input for a meeting of the Council's Intellectual Property Working Party held on February 12. The paper proposes only minor tweaks to the European Commission's original text, such as excluding individual Internet users from liability for the tax and carving out "individual words or very short excerpts of text" from its scope, without specifying what "very short excerpts" actually means.

The discussion paper also briefly acknowledges the alternative proposal of dropping the link tax altogether, and instead addressing publishers' concerns without creating any new copyright-like impost. This alternative proposal would create a legal presumption that news publishers are entitled to enforce the existing copyrights in news articles written by their journalists. If Internet platforms are reproducing such large parts of news articles that permission from the copyright owner is required, this would enable the publishers to negotiate directly with those platforms to license that use. This is the only sensible compromise that can be made to the Article 11 proposal, but it is one that the Bulgarian Presidency unfortunately gives short shrift.

What has Happened with Upload Filtering?

The same discussion paper also tinkers around the edges of the upload filtering mandate, without addressing the fundamental dangers that it continues to pose to freedom of expression online. For those who came in late, the European Commission's initial upload filter proposal,  formally designated as Article 13, would require Internet platforms to put in place costly and ineffective automatic filters to prevent copyright-infringing content from being uploaded by users, creating a kind of robotic censorship regime.

What has changed since then? Not much. The Bulgarian Presidency proposes being slightly more specific about which online platforms the measure targets ("online content sharing services"). It also proposes a new, expansive definition of "communication to the public," an exclusive right reserved to copyright holders in Europe that had previously been defined only through a complicated series of court decisions. By deeming an Internet platform to be engaged in "communication to the public" whenever it allows a user to upload a copyright-protected work for sharing, the Bulgarian Presidency aims to justify excluding that platform from the copyright safe harbor that the existing E-Commerce Directive provides.

The only other change worth noting is that the proposal is now more equivocal about whether Internet platforms would actually have to install automated upload filters, or whether it would be sufficient for them to prevent the uploading of copyright-infringing material in some other way. But as European Digital Rights (EDRi) has cogently pointed out, this is a distinction without a difference.

To comply with Article 13 and to avoid liability under the E-Commerce Directive (per the Bulgarian Presidency's amendment), platforms are required to "take effective measures to prevent the availability on its services of ... unauthorized works or other subject-matter identified by the rightholders," and if such works do nevertheless appear on the platform, must "act expeditiously to remove or disable access to the specific unauthorized work or other subject matter and ... take steps to prevent its future availability."

There is no way platforms could possibly comply with this mandate other than by monitoring all of the content they accept, either manually or automatically. By not daring to speak this uncomfortable truth, the Bulgarian Presidency skirts around the fact that such a general monitoring obligation would contravene both Article 15 of the E-Commerce Directive and European human rights law. But that kind of clever circumlocution can't hide the repressive nature of this censorship proposal, and it does nothing to improve on the flaws of the original text.
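To make this concrete, consider the crudest automated filter imaginable, sketched below in Python. This is our own hypothetical illustration, not any platform's actual system; the fingerprint scheme, function names, and sample data are all invented for the example. Even to keep a single identified work off a service, every upload from every user must be fingerprinted and checked against the rightsholders' database.

# Toy illustration (ours, hypothetical): even the simplest filter that
# "prevents the availability" of identified works must inspect every upload.
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint. Real filters use fuzzy perceptual hashes
    so that re-encoded or cropped copies still match, which requires
    inspecting uploads even more deeply than this."""
    return hashlib.sha256(data).hexdigest()

# Fingerprints of works rightsholders have identified for blocking
# (stand-in bytes for the example).
blocked = {fingerprint(b"<bytes of an identified copyrighted work>")}

def handle_upload(data: bytes) -> bool:
    """Runs on *every* upload by *every* user -- i.e., general monitoring."""
    return fingerprint(data) not in blocked

print(handle_upload(b"a user's own home video"))                    # True
print(handle_upload(b"<bytes of an identified copyrighted work>"))  # False

The point of the sketch is structural: there is no way to "prevent availability" without a check that touches everything users post.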

What Can You Do?

The fight against Article 11 and Article 13 is entering its closing days. That makes every voice that we can raise in opposition to these harmful proposals more important than ever before. European voices are best placed to convince European policymakers of the harm that their proposals would wreak upon European businesses and users. Thankfully, our allies in Europe are on the case, and if you are European or have colleagues or friends in Europe, here are the links you need to contact your representatives and speak out against their misguided plans:

  • Mozilla has put together an awesome call-in tool and response guide, which makes it easy to identify your specific concerns as a technologist, creator, innovator, scientist, or librarian. You can also read more on Mozilla's site about how all of these categories of users, and more, are affected by the Article 11 and Article 13 proposals, along with some of the other more obscure (but still important) provisions of the broader Digital Single Market Directive.
  • A coalition called Create.Refresh has launched a brilliant viral campaign that encourages creators to make and share their own works addressing the problems inherent in restrictive filtering systems, such as those that Article 13 would effectively mandate.
  • OpenMedia's Save the Link network has updated their click-to-call website this month with a brand new petition on Article 11 that lets you identify yourself as one of the impacted groups from a drop-down menu on the new page. If you are a librarian, software developer, creator, researcher, or journalist, you'll be able to demonstrate how the link tax proposals harm you specifically.

As you can see, there are many options for you to get involved in this fight—and with the final Committee vote in the European Parliament coming up on March 26-27, now is the best time to do so. If we lose this one, the link tax and upload filtering mandates could be here to stay, and the Internet as we know it will never be the same.

Internet Users Spoke Up To Keep Safe Harbors Safe

EFF - Mon, 02/12/2018 - 9:04pm

Today, we delivered a petition to the U.S. Copyright Office to keep copyright’s safe harbors safe.  We asked the Copyright Office to remove a bureaucratic requirement that could cause websites and Internet services to lose protection under the Digital Millennium Copyright Act (DMCA). And we asked them to help keep Congress from replacing the DMCA safe harbor with a mandatory filtering law. Internet users from all over the U.S. and beyond added their voices to our petition.

Under current law, the owners of websites and online services can be protected from monetary liability when their users are accused of infringing copyright through the DMCA “safe harbors.” In order to take advantage of these safe harbors, owners must meet many requirements, including participating in the notorious notice-and-takedown procedure for allegedly infringing content. They also must register an agent—someone who can respond to takedown requests—with the Copyright Office.

The DMCA is far from perfect, but provisions like the safe harbor allow websites and other intermediaries that host third-party material to thrive and grow without the constant threat of massive copyright penalties. Without safe harbors, small Internet businesses could face bankruptcy over the infringing activities of just a few users.

Now, a lot of those small sites risk losing their safe harbor protections. That’s because of the Copyright Office’s rules for registering agents. Those registrations used to be valid as long as the information was accurate. Under the Copyright Office’s new rules, website owners must renew their registrations every three years or risk losing safe harbor protections. That means that websites can risk expensive lawsuits for nothing more than forgetting to file a form. As we’ve written before, because the safe harbor already requires websites to submit and post accurate contact information for infringement complaints, there’s no good reason for agent registrations to expire. We’re also afraid that it will disproportionately affect small businesses, nonprofits, and hobbyists, who are least able to have a cadre of lawyers at the ready to meet bureaucratic requirements.

Many website owners have signed up under the Copyright Office’s new agent registration system, which is designed to send reminder emails when the three-year registrations are set to expire. While the new registration system is a vast improvement over the old paper filing system, the expiration requirement is unnecessary and dangerous.

We explained these problems in our petition, and we also explained how the DMCA faces even greater threats. If certain major media and entertainment companies get their way, it will become much more difficult for websites of any size to earn their safe harbor status. That’s because those companies’ lobbyists are pushing for a system where platforms would be required to use computerized filters to check user-uploaded material for potential copyright infringement.

Requiring filters as a condition of safe harbor protections would make it much more difficult for smaller web platforms to get off the ground. Automated filtering technology is expensive, and not very good: even when big companies deploy filters, the filters are extremely error-prone, causing lots of lawful speech to be blocked or removed. A filtering mandate would threaten smaller websites’ ability to host user content at all, cementing the dominance of today’s Internet giants.

If you run a website or online service that stores material posted by users, make sure that you comply with the DMCA’s requirements. Register a DMCA agent through the Copyright Office’s online system, post the same information on your website, and keep it up to date. Meanwhile, we’ll keep telling the Copyright Office, and Congress, to keep the safe harbors safe.

Imprisoned Blogger Eskinder Nega Won't Sign a False Confession

EFF - Mon, 02/12/2018 - 8:43pm

Online publisher and blogger Eskinder Nega has been imprisoned in Ethiopia since September 2011 for the "crime" of writing articles critical of his government. He is one of the longest-serving prisoners in EFF's Offline casefile of writers and activists unjustly imprisoned for their work online.

Now the chance that he may finally be freed has been thrown into doubt by the Ethiopian authorities' outrageous demand that he sign a false confession before being released.

The Ethiopian Prime Minister, Hailemariam Desalegn, announced surprise plans in January to close down the notorious Maekelawi detention center and release a number of prisoners. The Prime Minister said the move was intended to "foster national reconciliation."

While Ethiopia's own officials have declined to call the recipients of the amnesty "political prisoners," the bulk of the candidates named so far for release are either opposition politicians and activists, or others, like Eskinder, caught up in previous crackdowns on dissent and free speech.

Despite the government's apparent desire to use the release to moderate tensions in Ethiopia, prison officials have undermined its message—and Eskinder's chance at freedom—by requiring him to sign a false confession before his release.

The document, given to Eskinder without warning last week, included a claim that Eskinder was a member of Ginbot 7, a group the government has previously declared a terrorist organization. Eskinder refused to sign the document, and was subsequently returned to his cell, even as other prisoners were being released. The Committee to Protect Journalists subsequently told Quartz Africa that Eskinder was asked to sign the form a second time over the weekend.

EFF continues to follow Eskinder's case closely, and urges the Ethiopian government to live up to its promise of a new era of reconciliation and renewal by returning Eskinder to his friends and family, unconditionally and immediately.

Oregon Steps Up to the Plate on Network Neutrality This Month

EFF - Mon, 02/12/2018 - 2:19pm

It should not be surprising that arguably the biggest mistake in Internet policy history is going to provoke a vast political response. Since the FCC repealed the federal Open Internet Order in December, many states have moved to fill the void. With a new bill that reinstates net neutrality protections, Oregon is the latest state to step up.

Oregon’s Majority Leader Jennifer Williamson recently announced her intention to fight to restore much of what the FCC repealed last December under its so-called “Restoring Internet Freedom Order.” Her legislation, H.B. 4155, responds to the FCC’s decision by requiring any ISP that receives funds from the state to adhere to net neutrality principles: no blocking or throttling content, and no prioritizing its own content over that of competitors, for example.

If you’re an Oregonian, tell your state representative to act to restore net neutrality.

Oregon is following in what is clearly a trend of state legislatures and executives acting to protect their citizens’ digital rights where the federal government has abdicated responsibility. To date, 17 states have introduced network neutrality legislation and four Governors have issued Executive Orders (Montana, New York, New Jersey, and Hawaii).

The national response to the FCC’s decision to abandon its role as the consumer protection agency overseeing cable and telephone companies is to be expected. The decision is wildly unpopular with voters of all political leanings: 83% of voters overall, including 3 out of 4 Republican voters, opposed it. Yet despite millions of Americans submitting comments to the FCC in opposition, they were promptly ignored in favor of the interests of AT&T, Verizon, and Comcast. Where else should this vast swath of the American public go if not their state and local representatives?

And while both Verizon and its trade association, CTIA, made last-minute requests to the FCC to try to prevent state privacy and network neutrality laws, they are not going to be successful. Their problem is that the plan to eviscerate the law that empowers the FCC also disables the agency’s ability to block state laws. In other words, they cannot have it both ways.

While the FCC's order did contain a lot of words about how states cannot pass their own network neutrality laws, it did so without citing any specific legal authority. We remain skeptical that the FCC itself has that power. And while states still have to navigate the Commerce Clause, EFF has provided guidance on how to do that.

Notably, states and local governments, and in particular governors, have caught on to this obvious weakness in the FCC’s authority and have acted. EFF will continue working to support the states in their efforts to protect a free and open Internet until we are able to fully restore the protections we once had at the federal level.

Take Action

Tell your state representatives to support H.B. 4155 and restore net neutrality

The CLOUD Act: A Dangerous Expansion of Police Snooping on Cross-Border Data

EFF - Thu, 02/08/2018 - 9:09pm

This week, Senators Hatch, Graham, Coons, and Whitehouse introduced a bill that diminishes the data privacy of people around the world.

The Clarifying Overseas Use of Data (CLOUD) Act expands American and foreign law enforcement’s ability to target and access people’s data across international borders in two ways. First, the bill creates an explicit provision for U.S. law enforcement (from a local police department to federal agents in Immigration and Customs Enforcement) to access “the contents of a wire or electronic communication and any record or other information” about a person regardless of where they live or where that information is located on the globe. In other words, U.S. police could compel a service provider—like Google, Facebook, or Snapchat—to hand over a user’s content and metadata, even if it is stored in a foreign country, without following that foreign country’s privacy laws.[1]

Second, the bill would allow the President to enter into “executive agreements” with foreign governments that would allow each government to acquire users’ data stored in the other country, without following each other’s privacy laws.

For example, because U.S.-based companies host and carry much of the world’s Internet traffic, a foreign country that enters one of these executive agreements with the U.S. could potentially wiretap people located anywhere on the globe (so long as the target of the wiretap is not a U.S. person or located in the United States) without the procedural safeguards U.S. law typically applies to data stored in the United States, such as a warrant, or even notice to the U.S. government. This is an enormous erosion of current data privacy laws.

This bill would also moot legal proceedings now before the U.S. Supreme Court. In the spring, the Court will decide whether current U.S. data privacy laws allow U.S. law enforcement to serve warrants for information stored outside the United States. The case, United States v. Microsoft (often called “Microsoft Ireland”), also calls into question principles of international law, such as respect for other countries’ territorial boundaries and their rule of law.

Notably, this bill would expand law enforcement access to private email and other online content, yet the Email Privacy Act, which would create a warrant-for-content requirement, has still not passed the Senate, even though it has enjoyed unanimous support in the House for the past two years.

The CLOUD Act and the US-UK Agreement

The CLOUD Act’s proposed language is not new. In 2016, the Department of Justice first proposed legislation that would enable the executive branch to enter into bilateral agreements with foreign governments, allowing those governments direct access to U.S. companies and U.S.-stored data. Ellen Nakashima at the Washington Post broke the story that these agreements (the first iteration has already been negotiated with the United Kingdom) would enable foreign governments to wiretap any communication in the United States, so long as the target is not a U.S. person. In 2017, the Justice Department re-submitted the bill for Congressional review with a few changes, this time adding broad language to allow the extraterritorial application of U.S. warrants.

In September 2017, EFF, with a coalition of 20 other privacy advocates, sent a letter to Congress opposing the Justice Department’s revamped bill.

The executive agreement language in the CLOUD Act is nearly identical to the language in the DOJ’s 2017 bill. None of EFF’s concerns have been addressed. The legislation still:

  • Includes a weak standard for review that does not rise to the protections of the warrant requirement under the Fourth Amendment.
  • Fails to require foreign law enforcement to seek individualized and prior judicial review.
  • Grants real-time access and interception to foreign law enforcement without requiring the heightened warrant standards that U.S. police have to adhere to under the Wiretap Act.
  • Fails to place adequate limits on the category and severity of crimes for this type of agreement.
  • Fails to require notice on any level – to the person targeted, to the country where the person resides, and to the country where the data is stored. (Under a separate provision regarding U.S. law enforcement extraterritorial orders, the bill allows companies to give notice to the foreign countries where data is stored, but there is no parallel provision for company-to-country notice when foreign police seek data stored in the United States.)

The CLOUD Act also creates an unfair two-tier system. Foreign nations operating under executive agreements are subject to minimization and sharing rules when handling data belonging to U.S. citizens, lawful permanent residents, and corporations. But these privacy rules do not extend to someone born in another country and living in the United States on a temporary visa or without documentation. This denial of privacy rights is unlike other U.S. privacy laws. For instance, the Stored Communications Act protects all members of the “public” from the unlawful disclosure of their personal communications.

An Expansion of U.S. Law Enforcement Capabilities

The CLOUD Act would give unlimited jurisdiction to U.S. law enforcement over any data controlled by a service provider, regardless of where the data is stored and who created it. This applies to content, metadata, and subscriber information – meaning private messages and account details could be up for grabs. The breadth of such unilateral extraterritorial access creates a dangerous precedent for other countries who may want to access information stored outside their own borders, including data stored in the United States.

EFF argued on this basis (among others) against unilateral U.S. law enforcement access to cross-border data, in our Supreme Court amicus brief in the Microsoft Ireland case.

When data crosses international borders, U.S. technology companies can find themselves caught in the middle between the conflicting data laws of different nations: one nation might use its criminal investigation laws to demand data located beyond its borders, yet that same disclosure might violate the data privacy laws of the nation that hosts that data. Thus, U.S. technology companies lobbied for and received provisions in the CLOUD Act allowing them to move to quash or modify U.S. law enforcement orders for extraterritorial data. The tech companies can quash a U.S. order when the order does not target a U.S. person and might conflict with a foreign government’s laws. To do so, the company must object within 14 days, and undergo a complex “comity” analysis – a procedure where a U.S. court must balance the competing interests of the U.S. and foreign governments.

Failure to Support Mutual Assistance

Of course, there is another way to protect technology companies from this dilemma, which would also protect the privacy of technology users around the world: strengthen the existing international system of Mutual Legal Assistance Treaties (MLATs). This system allows police who need data stored abroad to obtain the data through the assistance of the nation that hosts the data. The MLAT system encourages international cooperation.

It also advances data privacy. When foreign police seek data stored in the U.S., the MLAT system requires them to adhere to the Fourth Amendment’s warrant requirements. And when U.S. police seek data stored abroad, it requires them to follow the data privacy rules where the data is stored, which may include important “necessary and proportionate” standards. Technology users are most protected when police, in the pursuit of cross-border data, must satisfy the privacy standards of both countries.

While there are concerns from law enforcement that the MLAT system has become too slow, those concerns should be addressed with improved resources, training, and streamlining.

The CLOUD Act raises dire implications for the international community, especially as the Council of Europe is beginning a process to review the MLAT system that has been supported for the last two decades by the Budapest Convention. Although Senator Hatch has in the past introduced legislation that would support the MLAT system, this new legislation fails to include any provisions that would increase resources for the U.S. Department of Justice to tackle its backlog of MLAT requests, or otherwise improve the MLAT system.

A growing chorus of privacy groups in the United States opposes the CLOUD Act’s broad expansion of U.S. and foreign law enforcement’s unilateral powers over cross-border data. For example, Sharon Bradford Franklin of OTI (and the former executive director of the U.S. Privacy and Civil Liberties Oversight Board) objects that the CLOUD Act will move law enforcement access capabilities “in the wrong direction, by sacrificing digital rights.” CDT and Access Now also oppose the bill.

Sadly, some major U.S. technology companies and legal scholars support the legislation. But, to set the record straight, the CLOUD Act is not a “good start.” Nor does it do a “remarkable job of balancing these interests in ways that promise long-term gains in both privacy and security.” Rather, the legislation reduces protections for the personal privacy of technology users in an attempt to mollify tensions between law enforcement and U.S. technology companies.

Legislation to protect the privacy of technology users from government snooping has long been overdue in the United States. But the CLOUD Act does the opposite, and privileges law enforcement at the expense of people’s privacy. EFF strongly opposes the bill. Now is the time to strengthen the MLAT system, not undermine it.

[1] The text of the CLOUD Act does not limit U.S. law enforcement to serving orders on U.S. companies or companies operating in the United States. The Constitution may prevent the assertion of jurisdiction over service providers with little or no nexus to the United States.

Related Cases: In re Warrant for Microsoft Email Stored in Dublin, Ireland

IPR Process Saves 80 Companies From Paying For a Sports-Motion Patent

EFF - Wed, 02/07/2018 - 7:17pm

The importance of the US Patent Office’s “inter partes review” (IPR) process was highlighted in dramatic fashion yesterday. Patent appeals judges threw out a patent [PDF] that was used to sue more than 80 companies in the fitness, wearables, and health industries.

US Patent No. 7,454,002 was owned by Sportbrain Holdings, a company that advertised a kind of ‘smart pedometer’ as recently as 2011. But the product apparently didn’t take off, and in 2016, Sportbrain turned to patent lawsuits to make a buck.

A company called Unified Patents challenged the ’002 patent by filing an IPR petition, and last year, the Patent Office agreed that the patent should be reviewed. Yesterday, the patent judges published their decision, canceling every claim of the patent.

The ’002 patent describes capturing a user’s “personal data,” and then sharing that information with a wireless computing device and over a network. It then analyzes the data and provides feedback.
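To see how routine those claimed steps are, here is a deliberately plain sketch of that capture-share-analyze-feedback loop. It is our own hypothetical illustration, not code from the patent or any product; the URL, field names, and step goal are invented for the example.

# Hypothetical sketch (ours) of the loop the '002 patent describes:
# capture personal data, share it over a network, analyze it, give feedback.
import json
from urllib import request

def capture_steps() -> dict:
    """Stand-in for reading a pedometer; returns a user's 'personal data'."""
    return {"user": "alice", "steps": 8542}

def share(data: dict, url: str = "https://example.com/api/steps") -> None:
    """Send the captured data to a server over the network (not invoked here)."""
    req = request.Request(url, data=json.dumps(data).encode(),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

def analyze_and_feedback(data: dict) -> str:
    """Analyze the data and provide feedback to the user."""
    goal = 10000
    if data["steps"] >= goal:
        return "Goal met -- nice work!"
    return f"{goal - data['steps']} steps to go today."

print(analyze_and_feedback(capture_steps()))  # "1458 steps to go today."

Each step is the kind of everyday data handling a working programmer would produce without any inventive leap, which is consistent with the obviousness finding described next.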

After reviewing the relevant technology, a panel of Patent Office judges found there wasn’t much new in the ’002 patent. Earlier patents had already described collecting and sharing various types of sports data, including computer-assisted pedometers and a system that measured a skier’s “air time.” Given those earlier advances, the steps of the Sportbrain patent would have been obvious to someone working in the field.

That means the dozens of different companies sued by Sportbrain won’t have to each spend hundreds of thousands of dollars—potentially millions—to defend against a patent that, the government now acknowledges, never should have been granted in the first place.

A Critical Tool for Innovators

Bad patents like the one asserted by Sportbrain are a drain on the innovation economy, especially for small businesses. But the damage that could be caused by such patents was much worse before the advent of IPRs.

The IPR process has proven to be the most effective part of the 2012 America Invents Act. In most cases, the IPR process is far more efficient than federal courts when it comes to evaluating a patent to figure out if it’s truly new and non-obvious.

IPRs have other advantages for small companies. Often, companies that get sued or threatened by patent trolls end up paying a licensing fee even though they don’t think the patents are legitimate, because fighting alone is too expensive. Through the IPR process, defendants can band together to share the cost of challenging a patent. That’s enabled the success of membership-based for-profit companies like RPX and Unified Patents; in fact, it was member-funded Unified that filed the petition that shut down the Sportbrain Holdings patent.

The IPR process also enables non-profits like EFF to fight bad patents. That’s how EFF was able to knock out the Personal Audio “podcasting” patent. The petition was paid for by the more than 1,000 donors who gave to our “Save Podcasting” campaign. Last year, EFF’s victory in that case was upheld by a federal appeals court.

But the IPR process could be in danger. Senator Chris Coons has twice proposed legislation (the STRONG Patents Act and the STRONGER Patents Act) that would gut the IPR system. EFF has opposed these bills. Other opponents of IPRs have taken their complaints to the courts. One company has asked the Supreme Court to declare the process unconstitutional. This case, Oil States, will decide the future of IPRs. We’ve submitted a brief explaining why we think the process of reviewing patents at the Patent Office is not only constitutional, it’s good public policy. We hope both Congress and the high court see their way to upholding this critical tool that saved 80 companies from damaging litigation—and that was just yesterday.

Related Cases: EFF v. Personal Audio LLC

John Perry Barlow, Internet Pioneer, 1947-2018

EFF - Wed, 02/07/2018 - 5:21pm

With a broken heart I have to announce that EFF's founder, visionary, and our ongoing inspiration, John Perry Barlow, passed away quietly in his sleep this morning. We will miss Barlow and his wisdom for decades to come, and he will always be an integral part of EFF.

It is no exaggeration to say that major parts of the Internet we all know and love today exist and thrive because of Barlow’s vision and leadership. He always saw the Internet as a fundamental place of freedom, where voices long silenced can find an audience and people can connect with others regardless of physical distance.

Barlow was sometimes held up as a straw man for a kind of naive techno-utopianism that believed the Internet could solve all of humanity's problems without causing any more. As someone who spent the past 27 years working with him at EFF, I can say that nothing could be further from the truth. Barlow knew that new technology could create and empower evil as much as it could create and empower good. He made a conscious decision to focus on the latter: "I knew it’s also true that a good way to invent the future is to predict it. So I predicted Utopia, hoping to give Liberty a running start before the laws of Moore and Metcalfe delivered up what Ed Snowden now correctly calls 'turn-key totalitarianism.'"

Barlow’s lasting legacy is that he devoted his life to making the Internet into “a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth . . . a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”

In the days and weeks to come, we will be talking and writing more about what an extraordinary role Barlow played for the Internet and the world. And as always, we will continue the work to fulfill his dream.
