Electronic Frontier Foundation
EFF has asked a federal court to rule in its favor in a lawsuit we filed against an Australian company that sought to use foreign law to censor us from expressing our opinion about its patent. While the company, Global Equity Management (SA) Pty Ltd (GEMSA), knows its way around U.S. courts—having filed dozens of lawsuits against big tech companies claiming patent infringement—it has failed to respond to ours. Today we asked for a default judgment, which, if granted, means we win the case.
It all started when GEMSA’s patent litigation was featured in our June 2016 blog series “Stupid Patent of the Month.” The company wrote to EFF accusing us of “false and malicious slander.” It subsequently filed a lawsuit and obtained an injunction from a South Australian court ordering EFF to take down the blog post and blocking us from ever talking about any of its intellectual property.
We have not removed the post. The South Australian injunction can’t be enforced in the U.S. under a 2010 federal law that took aim at “libel tourism,” a practice by which plaintiffs—often billionaires, celebrities, or oligarchs—sued U.S. writers and academics in countries like England where it was easier to win a defamation case.
The Securing the Protection of Our Enduring and Established Constitutional Heritage Act (SPEECH Act) says foreign orders aren’t enforceable in the United States unless they are consistent with the free speech protections provided by the U.S. and state constitutions, as well as state law. Our lawsuit, filed in U.S. District Court, Northern District of California, maintains that GEMSA’s injunction, which seeks to silence expression of an opinion, would never survive scrutiny under the First Amendment in the United States and should therefore be declared unenforceable. We stood ready to defend our right to express constitutionally protected speech.
GEMSA, which has three pending patent lawsuits in the Northern District of California, had until May 23 to respond to our case. That day came and went without a word. We can’t speculate as to why GEMSA hasn’t responded. To get a default judgment, we need to show not only that GEMSA has failed to answer our claims but also that, regarding our claim that the South Australia injunction is unenforceable in the U.S., the law is on our side.
We believe that we should prevail. The law does not allow companies or individuals to make an end run around the First Amendment by finding a judge in another country to sign an injunction that censors speech in the U.S. The law the Australian court applied to grant the injunction didn’t provide as much protection for EFF’s speech as American law, which means it’s unenforceable under the SPEECH Act. Additionally, the injunction is unconstitutional under American law as it prohibits all future speech by EFF about any of GEMSA’s patents. Such prohibitions are also known as prior restraints, and are allowed only in the rarest of circumstances, none of which apply here.
Our laws also don’t allow plaintiffs to be left under a cloud of uncertainty as to their ability to speak publicly about something as important as patent litigation and reform. The Australian injunction states that failure to comply could result in the seizure of EFF’s assets and prison time for its officers. GEMSA attorneys have threatened to take the Australian injunction to American search engine companies to deindex the blog post, making the post harder to find online.
The court should set the record straight and grant our request for a default judgment. Our laws call for no less.
The detention of a group of human rights defenders in Turkey for daring to learn about digital security and encryption continued last week with a brief appearance of the accused in an Istanbul court. Six were returned to jail, and four were released on bail. In a further absurd twist, the four released activists were named in new detention orders on Friday and are now being re-arrested.
Among those currently being held in jail are Ali Gharavi and Peter Steudtner, digital security trainers from Sweden and Germany, who had traveled to Turkey to provide online privacy advice for a conference of human rights defenders. The meeting was raided by Turkish police on July 5, and appears to be the sole basis for the prosecution.
The court charged Gharavi and Steudtner with "committing crimes in the name of a terrorist organization without being a member." Their co-defendants include Idil Eser, the Director of Amnesty Turkey, Veli Acu and Günal Kurşun of the Human Rights Agenda Association, and Özlem Dalkıran of the Helsinki Citizens’ Assembly. Four others were released on bail, but new detention orders against them were announced on Friday, with two re-arrested over the weekend.
Gharavi and Steudtner have worked for many years in the global human rights community, providing advice about digital security and online well-being. Gharavi helped EFF with its Surveillance Self-Defense guides, and has held key technology roles at the Center for Victims of Torture and Tactical Tech. Steudtner's expertise is in holistic security, which combines technical training with his pacifist, non-violent principles.
When asked about the arrests, Turkey's President Recep Tayyip Erdoğan said that the group had "gathered for a meeting which was a continuation of July 15," referencing the date of the attempted coup against him in 2016. The government has used the coup as a justification for the subsequent mass arrests of over 50,000 people, including journalists, academics, judges and, most recently, technologists.
Strong digital security helps everyone; learning about encryption is not a sign of criminal activity. The Turkish authorities and media have continued, nonetheless, to tie the use of secure communications tools to the coup. A report in the conservative Islamist paper Yeni Akit declared that the detainees had secret government documents, and used the mobile communications app "ByLock" to stay in contact with groups connected to the coup. ByLock is a known insecure app that is largely unknown outside of Turkey and has been widely criticised by digital security experts. It is profoundly unlikely that Gharavi or Steudtner used it. Use of ByLock was also the sole reason the Turkish police gave for the arrest of Amnesty's Chair, Taner Kiliç, last month.
The condemnation of the Turkish courts' actions has been swift. U.S. State Department spokesperson Heather Nauert said the U.S. "strongly condemns the arrest of six respected human rights activists and calls for their immediate release," and urged Turkey to drop the charges, which it said undermine the country's rule of law.
Eliot Engel, the U.S. House of Representatives' ranking member on the Foreign Affairs committee, said that "The arrest of these brave men and women is unacceptable, and the latest example of the erosion of democracy in Turkey... I call on Turkish authorities to release Idil Eser and her fellow activists without delay or condition, and Secretary Tillerson must make this a top priority in his engagement with Turkey’s government."
Sweden's Foreign Minister, Margot Wallström, has called for the release of Gharavi, who is a Swedish national. "It is our understanding that Gharavi was in Turkey to participate in a peaceful seminar about freedom of the internet and we have urged Turkey to quickly clarify the grounds for the accusations against him," she said in a statement.
Germany, Steudtner's home country, has taken an even more forceful line. "We are strongly convinced that this arrest is absolutely unjustified," German Chancellor Angela Merkel said, according to the DPA news agency. Germany's Foreign Minister Sigmar Gabriel cut short a vacation to deal with the case, and summoned the Turkish Ambassador in Berlin, who was told "without diplomatic pleasantries" of Germany's expectation that Steudtner and his colleagues should be released immediately. Gabriel later warned that "the case of Peter Steudtner shows that German citizens are no longer safe from arbitrary arrests," and suggested that his continuing detention would lead to a "re-orienting" of Germany's policy toward Turkey.
The baseless prosecution of these human rights defenders, including Peter and Ali, two innocent technologists from allies of Turkey, highlights the decline of Turkey's democratic institutions. We continue to urge the Turkish authorities to listen to a chorus of countries and international organizations, and to free all ten victims of this profound injustice immediately.
Sixteen Asia-Pacific countries are meeting in Hyderabad, India, from 18–28 July 2017 for the 19th round of negotiations on the Regional Comprehensive Economic Partnership (RCEP). EFF is participating to advocate for improved transparency and openness in the negotiations, and to express our concerns about possible new rules on intellectual property and ecommerce that some countries are proposing for the agreement.
RCEP is a free trade agreement (FTA) aimed at broadening regional economic integration and liberalising trade and investment between the 10 ASEAN economies and their trading partners, including Australia, China, India, Japan, Korea, and New Zealand. The total population covered by RCEP exceeds 3 billion, and with a combined GDP of about US$17 trillion, accounting for about 40% of the world’s trade, RCEP is the biggest mega-regional trade agreement currently under negotiation.
The idea of RCEP was first introduced at an ASEAN Summit in 2011 and formal negotiations were launched in 2012. Over the last five years, the scope of the agreement has grown to include commitments for trade in goods and services, boosting economic and technical cooperation, and intellectual property. Worryingly, discussions on ecommerce issues including rules on software, data flows, and regulatory standards that have not been addressed in other trade mechanisms are also being included in the RCEP negotiations.
Reports suggest that Japan, Australia, South Korea, and New Zealand have been pushing for binding commitments from the RCEP members on ecommerce. A separate working group on ecommerce (WGEC) has been established with the aim of formalising a chapter on ecommerce in the final agreement. The agreement and the issues being negotiated are being kept confidential; however, a few draft chapters have been leaked, including the ‘Terms of Reference (TOR)’ for the WGEC. WGEC members are hopeful of concluding the deal by year’s end, which would include ‘liberalisation commitments’ and norms for ecommerce, including provisions on investment, dispute settlement, and competition.
The proposed elements for the TOR (for negotiations) are understood to include domestic regulatory frameworks for market access, customs duties on electronic transmissions, non-discriminatory treatment of digital products, paperless trading, electronic signatures, digital certificates, and online consumer protection issues such as the storage and transfer of personal data, and spam.
Controversial issues such as prohibiting requirements on the location of computing facilities and allowing the cross-border transfer of information by electronic means are also expected to fall within the scope of the chapter. Further, countries including Australia and Japan have proposed making a permanent commitment to zero duties on digital transmissions, and prohibiting rules that require compulsory disclosure of source code.
Given the secrecy of the negotiations, the lack of opportunities for public input in the process, and the complexity of the issues involved, EFF convened an expert panel on ecommerce issues in the RCEP negotiations in Hyderabad. The public meeting was organised in partnership with the National Institute of Public Finance and Policy (NIPFP) and the National Law University, Hyderabad. Speakers included Professor Ajay Shah (NIPFP), Parminder Jeet Singh (IT for Change) and Professor VC Vivekananda (Bennett University).
Panelists raised several issues including ensuring non-discriminatory treatment of digital products transmitted electronically and the need for guaranteeing that these products will not face government-sanctioned discrimination based on the nationality or territory in which the product is produced. Security risks associated with the prohibition of source code disclosure, and the costs of imposing measures that restrict cross-border data flows and or require the use or installation of local computing facilities were also raised by panelists.
The event was a success, with negotiators from nine countries, including Vietnam, Japan, Australia, New Zealand, Laos, Cambodia, South Korea, and Thailand, attending the meeting. Given that access for users at such negotiations is restricted, the large number of negotiators showing interest was very encouraging. Understandably, the negotiators did not ask questions or participate in the discussions; however, their interest in the issues is evident from WGEC members turning up for the panel. This is definitely an improvement on previous negotiations, where there has been limited participation from negotiators at similar events. We also received feedback that the WGEC would like to see specific issues discussed in depth, including positive commitments that could be included.
EFF is maintaining a cautious and critical stance on the inclusion of e-commerce rules in RCEP, and the inclusion of similar rules in NAFTA, simultaneously being negotiated on the other side of the world. While it is possible to deal with e-commerce in a trade agreement in a balanced way that respects users’ rights, this is made unnecessarily difficult when those rules are negotiated in secret. Nonetheless, until a better way of engaging with negotiators exists, EFF will continue to provide our input through unofficial side events and bilateral meetings, because this is the best way that we can stand up for your rights in what remains an unfair and secretive process.
The failed Trans-Pacific Partnership (TPP) was a lesson in what happens when trade agreements are negotiated in secret. Powerful corporations can lobby for dangerous, restrictive measures, and the public can't effectively bring balance to the process. Now, some members of Congress are seeking to make sure that future trade agreements, such as the renegotiated version of NAFTA, are no longer written behind closed doors. We urge you to write your representative and ask them to demand transparency in trade.
Representative Debbie Dingell (D-MI) has today introduced the Promoting Transparency in Trade Act (H.R. 3339) [PDF], with co-sponsorship by Representatives Rosa DeLauro (D-CT), Tim Ryan (D-OH), Marcy Kaptur (D-OH), Jamie Raskin (D-MD), Keith Ellison (D-MN), Raúl Grijalva (D-AZ), John Conyers (D-MI), Jan Schakowsky (D-IL), Louise Slaughter (D-NY), Mark DeSaulnier (D-CA), Dan Lipinski (D-IL), Chellie Pingree (D-ME), Brad Sherman (D-CA), Jim McGovern (D-MA), Rick Nolan (D-MN), and Mark Pocan (D-WI). Representative Dingell describes the bill as follows:
The Promoting Transparency in Trade Act would require the U.S. Trade Representative (USTR) to publicly release the proposed text of trade deals prior to each negotiating round and publish the considered text at the conclusion of each round. This will help bring clarity to a process that is currently off limits to the American people. Actively releasing the text of trade proposals will ensure that the American public will be able to see what is being negotiated and who is advocating on behalf of policies that impact their lives and economic well-being.
We wholeheartedly agree. Indeed, these are among the recommendations that EFF has been pushing for some time, most recently at a January 2017 roundtable on trade transparency that we held with stakeholders from industry, civil society, and government. That event resulted in a set of five recommendations on the reform of trade negotiation processes that were endorsed by the Sunlight Foundation, the Association of Research Libraries, and OpenTheGovernment.org.
A previous version of the Promoting Transparency in Trade Act was introduced into the previous session of Congress, but died in committee. Compared with that version, this latest bill is an improvement because it requires the publication of consolidated draft texts of trade agreements after each round of negotiations, which the previous bill did not.
Another of our recommendations that is reflected in the bill is to require the appointment of an independent Transparency Officer to the USTR. Currently, the Transparency Officer is the USTR's own General Counsel, which creates a conflict of interest between the incumbent's duty to defend the office's current transparency practices and his or her duty to the public to reform those practices. An independent officer would be far more effective at pushing necessary reforms at the office.
The Promoting Transparency in Trade Act faces challenging odds to make it through Congress. Its next step towards passage into law will be its referral to the House Committee on Ways and Means, and probably its Subcommittee on Trade, which will decide whether the bill will be sent to the House of Representatives for a vote. The Senate will also have to vote on the bill before it becomes law. The more support that we can build for the bill now, the better its chances for surviving this perilous process.
Passage of this bill may be the best opportunity that we'll have to avoid a repetition of the closed, secretive process that led to the TPP. With the renegotiation of NAFTA commencing with the first official round of meetings in Washington, D.C. next month, it's urgent that these transparency reforms be adopted soon. You can help by writing to your representative in Congress and asking them to support the bill in committee.
The International Federation of Library Associations and Institutions (IFLA) has called on the World Wide Web Consortium (W3C) to reconsider its decision to incorporate digital locks into official HTML standards. Last week, W3C announced its decision to publish Encrypted Media Extensions (EME)—a standard for applying locks to web video—in its HTML specifications.
IFLA urges W3C to consider the impact that EME will have on the work of libraries and archives:
While recognising both the potential for technological protection measures to hinder infringing uses, as well as the additional simplicity offered by this solution, IFLA is concerned that it will become easier to apply such measures to digital content without also making it easier for libraries and their users to remove measures that prevent legitimate uses of works.
Technological protection measures […] do not always stop at preventing illicit activities, and can often serve to stop libraries and their users from making fair uses of works. This can affect activities such as preservation, or inter-library document supply. To make it easier to apply TPMs, regardless of the nature of activities they are preventing, is to risk unbalancing copyright itself.
IFLA’s concerns are an excellent example of the dangers of digital locks (sometimes referred to as digital rights management or simply DRM): under the U.S. Digital Millennium Copyright Act (DMCA) and similar copyright laws in many other countries, it’s illegal to circumvent those locks or to provide others with the means of doing so. That provision puts librarians in legal danger when they come across DRM in the course of their work—not to mention educators, historians, security researchers, journalists, and any number of other people who work with copyrighted material in completely lawful ways.
Of course, as IFLA’s statement notes, W3C doesn’t have the authority to change copyright law, but it should consider the implications of copyright law in its policy decisions: “While clearly it may not be in the purview of the W3C to change the laws and regulations regulating copyright around the world, they must take account of the implications of their decisions on the rights of the users of copyright works.”
EFF is in the process of appealing W3C’s controversial decision, and we’re urging the standards body to adopt a covenant protecting security researchers from anti-circumvention laws.
Three European Parliament Committees met during the week of July 10, to give their input on the European Commission's proposal for a new Directive on copyright in the Digital Single Market. We previewed those meetings last week, expressing our hope that they would not adopt the Commission's harmful proposals. The meetings did not go well.
All of the compromise amendments to the Directive proposed by the Committee on Culture and Education (CULT) that we previously catalogued were accepted in a vote of that committee, including the upload filtering mechanism, the link tax, the unwaivable right for artists, and the new tax on search engines that index images. Throwing gasoline on the dumpster fire of the upload filtering proposal, CULT would like to see cloud storage services added to the online platforms that are required to filter user uploads. As for the link tax, they have offered up a non-commercial personal use exemption as a sop to the measure's critics, though it is hard to imagine how this would soften the measure in practice, since almost all news aggregation services are commercially supported.
The meeting of the Industry, Research and Energy (ITRE) Committee held in the same week didn't go much better than that of the CULT Committee. The good news, if we can call it that, is that they softened the upload filtering proposal a little. The ITRE language no longer explicitly refers to content recognition technologies as a measure to be agreed between copyright holders and platforms that host "significant amounts" (the Commission proposal had said "large amounts") of copyright protected works uploaded by users. On the other hand, such measures aren't ruled out, either; so the change is a minor one at best.
There is no similar saving grace in ITRE's treatment of the link tax. Oddly for a committee dedicated to research, it proposed amendments to the link tax that would make life considerably harder for researchers by extending the tax to become payable not only on snippets from news publications but also on snippets taken from academic journals, whether those publications are online or offline. The extension of the link tax to journals came by way of a single-word amendment to recital 33 [PDF]:
Periodical publications which are published for scientific or academic purposes, such as scientific journals, should n̶o̶t̶ also be covered by the protection granted to press publications under this Directive.
This deceptively small change would open up a whole new class of works for which publishers could demand payment for the use of small snippets, apparently including works that the author had released under an open access license (since it's the publisher, not the author, that is the beneficiary of the new link tax).
The JURI Committee also met during the week, although it did not vote on any amendments. Even so, the statements and discussions of the participants at this meeting are just as important as the votes of the other committees, given JURI's leadership of the dossier. The meeting (a recording of which is available online) was chaired by German MEP Axel Voss, who has recently replaced Therese Comodini Cachia as rapporteur. Whereas Comodini Cachia's report for the committee had been praised for its balance, Voss has taken a much more hardline approach. Addressing him as chair, Pirate Party MEP Julia Reda stated during the meeting:
I have never seen a Directive proposal from the Commission that has been met with such unanimous criticism from academia. Europe's leading IP law faculties have stated in an open letter, and I quote, "There is independent scientific consensus that Articles 11 and 13 cannot be allowed to stand," and that the proposal for a neighboring right is "unnecessary, undesirable, and unlikely to achieve anything other than adding to complexity and cost".
The developments in the CULT, ITRE and JURI committees last week were disappointing, but they do not determine the outcome of this battle. More decisive will be the votes of the Civil Liberties, Justice and Home Affairs (LIBE) Committee in September, followed by negotiations around the principal report in the JURI Committee and its final vote on October 10. Either way, by year's end we will know whether European politicians have been utterly captured by their powerful publishing lobby, or whether the European Parliament still effectively represents the voices of ordinary European citizens.
In a disappointing opinion issued on Monday, the Ninth Circuit upheld the national security letter (NSL) statute against a First Amendment challenge brought by EFF on behalf of our clients CREDO Mobile and Cloudflare. We applaud our clients’ courage as part of a years-long court battle, conducted largely under seal and in secret.
We strongly disagree with the opinion and are weighing how to proceed in the case. Even though this ruling is disappointing, together EFF and our clients achieved a great deal over the past six years. The lawsuit spurred Congress to amend the law, and our advocacy related to the case caused leading tech companies to also challenge NSLs. Along the way, the government went from fighting to keep every single NSL gag order in place to the point where many have been lifted, some in whole and many in part. That includes this case, of course, where we can now proudly share our clients’ names with the world.
No matter what happens with these particular lawsuits, we are not done fighting unconstitutional use of NSLs and similar laws.

Making sense of a disappointing ruling
National security letters are a kind of subpoena issued by the FBI to communications service providers like our clients to force them to turn over customer records. NSLs nearly always contain gag orders preventing recipients from telling anyone about these surveillance requests, all without any mandatory court oversight. As a result, the Internet and communications companies that we all trust with our most sensitive information cannot be truthful with their customers and the public about the scope of government surveillance.
NSL gags are perfect examples of “prior restraints,” government orders prohibiting speech rather than punishing it after the fact. The First Amendment embodies the Founders’ strong distrust of prior restraints as powerful censorship tools, and the Supreme Court has repeatedly said they are presumptively unconstitutional unless they meet the “most exacting” judicial scrutiny. Similarly, because NSLs prevent recipients from talking about the FBI’s request for customer data, they are content-based restrictions on speech, which are subject to strict scrutiny. So NSL gags ought to be put to the strictest of First Amendment tests.
Unfortunately, the Ninth Circuit questioned whether NSLs are prior restraints at all. And although the court did acknowledge they are separately content-based restrictions on speech, it said the law is narrowly tailored even though it plainly allows censorship that is broader in scope and longer in duration than the government actually needs. As a result, the court held the government’s interest in national security overcomes any First Amendment interests at stake.
The ruling is seriously flawed.

Not-so-narrow tailoring
In order to find that the law satisfied strict scrutiny, the court overlooked both the overinclusiveness and indefinite duration of NSL gag orders. Narrow tailoring requires that a restriction on speech be fitted carefully to just what the government needs to protect its investigation and that no less speech-restrictive alternatives are available.
But NSLs are often wildly overinclusive. For example, they prevent even a company with millions of users like Cloudflare from simply saying it has received an NSL, on the theory that individual users engaged in terrorism or espionage might somehow infer from that fact alone that the government is on their trail.
The court admitted that a blanket gag in this scenario might well be overinclusive, but it simply deferred to the FBI’s decisionmaking. But of course, under the First Amendment, decisions about censorship aren’t supposed to be left to officials whose “business is to censor.” And here, we know that NSLs routinely issue to big tech companies with large numbers of users like both Cloudflare and CREDO, and only in rare circumstances does the FBI allow these companies to report on specific NSLs they’ve received.
Similarly, the FBI often leaves NSL gags in place indefinitely, sometimes even permanently. Indeed, the FBI has told our client CREDO that one of the NSLs in the case is now permanent, and the Bureau will not further revisit the gag it imposed to determine whether it still serves national security. Here again, the court acknowledged that at the least, narrow tailoring requires a gag “must terminate when it no longer serves” the government’s national security interests. But instead of applying the First Amendment’s narrow tailoring requirement, the court declined to “quibble” with the censoring agency, the FBI, and its loophole-ridden internal procedures for reviewing NSLs. Nevertheless, these procedures “do not resolve the duration issue entirely,” as the Ninth Circuit understatedly put it, since they may still produce permanent gags, as with CREDO. As a result, the court suggested that NSL recipients can repeatedly challenge permanent gags until they’re finally lifted.

The problem of prior restraints and judicial review
However, that points to the other fundamental problem with NSLs: they are issued without any mandatory court oversight. As discussed above, prior restraints are almost never constitutional. The Supreme Court has said that even in the rare circumstance when prior restraints can be justified, they must be approved by a neutral court, not just an executive official. But the NSL statute doesn’t require a court to be involved in all cases; instead, judicial review takes place only if NSL recipients file a lawsuit, like our clients did, or if they ask the government to go to court to review the gag using a procedure known as “reciprocal notice.”
The Ninth Circuit had two responses to this lack of judicial oversight.
First, it wrongly suggested the law of prior restraints simply does not apply here. The theory is that unlike cases involving newspapers that are prevented from publishing, NSL recipients haven’t shown a preexisting desire to speak, and when they do, they’re asking to publish information they supposedly learned from the government. But as we pointed out, that’s inconsistent with case law that says, for instance, that witnesses at grand jury proceedings—which are historically both secret and subject to court oversight—cannot be indefinitely gagged from talking about their own testimony. NSL gags go much further.
Second, the court suggested that even though the burden is on NSL recipients to challenge gags, this is a “de minimis” burden that doesn’t violate the First Amendment. When Congress passed the USA FREEDOM Act in 2015, it gave recipients the option of invoking reciprocal notice and asking the government to go to court rather than filing their own lawsuit. That’s simply not good enough; the First Amendment requires the government be the one to go to court to prove to a judge it actually requires an NSL accompanied by a gag. Not to mention that forcing companies that receive NSLs to fight them in court and defend user privacy may actually be a heavy burden.

Big progress nonetheless
Despite these considerable errors in the Ninth Circuit’s opinion, we shouldn’t lose sight of progress made along the way. Nearly all of the features of the NSL statute that the court pointed to as saving graces of the law—the FBI’s internal review procedures and the option for reciprocal notice most notably—exist only because Congress stepped in during our lawsuit to amend the law.
So what options are left for providers that receive NSLs? Push back on the gags early and often. The “reciprocal notice” process, which the government says only requires a short letter or a phone call, should be done as a matter of course for any company receiving an NSL. And since the Ninth Circuit said that courts retain the ability to re-evaluate the gags as long as they remain in place, gagged providers should ask a court to step in and make sure the FBI can still prove the need for the gag—potentially over and over—until the gag is finally lifted. EFF wants to help with this, and we’re happy to consult with anyone subject to an NSL gag.
We’ve also encouraged technology companies to make the best of the reciprocal notice procedure as part of our annual Who Has Your Back? report. If the government continues to argue that recipients don’t necessarily “want to speak” about NSLs, we can now point to the growing trend of major tech companies—Apple, Adobe, and Dropbox, among others—that have committed to invoking reciprocal notice and challenging every NSL they receive.
Finally, we’ve seen other courts question gag orders in related contexts, and we’ve supported companies like Facebook and Microsoft in these fights. We’re confident that in the long run, these prior restraints will be roundly rejected yet again.
Related Cases: National Security Letters (NSLs); In re: National Security Letter 2011 (11-2173); In re National Security Letter 2013 (13-80089); In re National Security Letter 2013 (13-1165)
Imagine trying to do online research on breast cancer, or William S. Burroughs’ famous novel Naked Lunch, only to find that your search results keep coming up blank. This is the confounding situation that faced Microsoft Bing users in the Middle East and North Africa for years, made especially confusing by the fact that if you tried the same searches on Google, it did offer results for these terms.
Problems caused by the voluntary blocking of certain terms by intermediaries are well-known; just last week, we wrote about how payment processors like Venmo are blocking payments from users who describe the payments using certain terms—like Isis, a common first name and the name of a heavy metal band, in addition to its usage as an acronym for the Islamic State. Such keyword-based filtering algorithms will inevitably result in overblocking and false positives because of their disregard for the context in which the words are used.
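To see why context-blind keyword matching guarantees false positives, consider this toy sketch. It is purely illustrative (no company's actual code; the function name and blacklist are invented for the example): any description containing a blacklisted word is flagged, no matter what it actually refers to.

```python
# Hypothetical illustration of a context-blind keyword blacklist.
BLACKLIST = {"isis", "iran"}

def is_blocked(description: str) -> bool:
    # Case-insensitive word match against the blacklist,
    # stripping trailing punctuation. No context is considered.
    words = description.lower().split()
    return any(word.strip(".,!?") in BLACKLIST for word in words)

# A payment for a heavy metal album is flagged exactly like one
# that actually references the terrorist group.
print(is_blocked("Isis heavy metal album"))   # True: a false positive
print(is_blocked("Iran kofta kebabs, yum"))   # True: a false positive
print(is_blocked("Dinner with friends"))      # False
```

Because the algorithm sees only tokens, not meaning, the only way to reduce false positives is human review or context-aware analysis, neither of which a simple blacklist provides.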
Search engines also engage in this type of censorship—in 2010, I co-authored a paper [PDF] documenting how Microsoft Bing (brand new at the time) engaged in filtering of sex-related terms in the Middle East and North Africa, China, India, and several other locations by not allowing users to turn off “safe search”. Despite the paper and various advocacy efforts over the years, Microsoft refused to budge on this—until recently.
At RightsCon this year, I led a panel discussion about the censorship of sexuality online, covering a variety of topics from Facebook’s prudish ideas about the female body to the UK’s restrictions on “non-conventional” sex acts in pornography to Iceland’s various attempts to ban online pornography. During the panel, I also raised the issue of Microsoft’s long-term ban on sexual search terms in the Middle East, noting specifically that the company’s blanket ban on the entire region seemed more a result of bad market research than government interference, based on the fact that a majority of countries in the MENA region do not block pornography, let alone other sexual content.
Surprisingly, not long after the conference, I did a routine check of Bing and was pleased to discover that “Middle East” had disappeared from the search engine’s location settings, replaced with “Saudi Arabia.” The search terms are still restricted in Saudi Arabia (likely at the request of the government), but users in other countries across the diverse region are no longer subject to Microsoft’s safe search. Coincidence? It's hard to say; just as we didn't know Microsoft's motivations for blacklisting sexual terms to begin with, it was no more transparent about its change of heart.
Standing up against this kind of overbroad private censorship is important—companies shouldn’t be making decisions based on assumptions about a given market, and without transparency and accountability. Decisions to restrict content for a particular reason should be made only when legally required, and with the highest degree of transparency possible. We commend Microsoft for rectifying their error, and would like to see them continue to make their search filtering policies and practices more open and transparent.
Today, a group of over 190 Internet engineers, pioneers, and technologists filed comments with the Federal Communications Commission explaining that the FCC’s plan to roll back net neutrality protections is based on a fundamentally flawed and outdated understanding of how the Internet works.
Signers include current and former members of the Internet Engineering Task Force and Internet Corporation for Assigned Names and Numbers' committees, professors, CTOs, network security engineers, Internet architects, systems administrators and network engineers, and even one of the inventors of the Internet’s core communications protocol.
This isn’t the first time many of these engineers have spoken out on the need for open Internet protections. In 2015, when EFF and the ACLU filed a friend-of-the-court brief defending the net neutrality rules, dozens of engineers signed onto a statement supporting the technical justifications for the Open Internet Order.
The engineers’ statement filed today contains facts about the structure, history, and evolving nature of the Internet; corrects technical errors in the proposal; and gives concrete examples of the harm that will be done should the proposal be accepted.
The engineers explain that:
"Based on certain questions the FCC asks in the Notice of Proposed Rulemaking (NPRM), we are concerned that the FCC (or at least Chairman Pai and the authors of the NPRM) appears to lack a fundamental understanding of what the Internet's technology promises to provide, how the Internet actually works, which entities in the Internet ecosystem provide which services, and what the similarities and differences are between the Internet and other telecommunications systems the FCC regulates as telecommunications services."
The engineers point to specific errors in the NPRM. As one example among many: the NPRM tries to argue that ISPs, not edge providers, are the main drivers for services such as streaming movies, sharing photos, posting on social media, automatic translation, and so on. The NPRM also erroneously assumes that transforming an IP packet from IPv4 to IPv6 somehow changes the form of the payload.
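The IPv4-to-IPv6 point is easy to demonstrate: translation replaces the network-layer header, but the payload bytes pass through untouched. Here is a minimal sketch (header fields are simplified and the addresses are placeholders; this is not a full RFC-compliant translator):

```python
import struct

def ipv4_packet(payload: bytes) -> bytes:
    # Simplified 20-byte IPv4 header: version/IHL, ToS, total length,
    # id, flags/fragment, TTL, protocol (6 = TCP), checksum (zeroed),
    # source and destination addresses.
    header = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, 20 + len(payload), 0, 0, 64, 6, 0,
        b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02",
    )
    return header + payload

def translate_to_ipv6(v4_packet: bytes) -> bytes:
    payload = v4_packet[20:]  # strip the 20-byte IPv4 header
    # Simplified 40-byte IPv6 header: version/class/flow label,
    # payload length, next header, hop limit, placeholder addresses.
    header = struct.pack(
        "!IHBB16s16s",
        6 << 28, len(payload), 6, 64, b"\x00" * 16, b"\x00" * 16,
    )
    return header + payload  # payload bytes are carried over verbatim

pkt = ipv4_packet(b"GET / HTTP/1.1\r\n")
assert translate_to_ipv6(pkt)[40:] == b"GET / HTTP/1.1\r\n"  # payload unchanged
```

Only the envelope changes; the "form of the payload" the NPRM worries about is byte-for-byte identical before and after translation.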
The engineers explain how the Internet (and in particular broadband) has changed since 2002, when the FCC first explicitly classified broadband internet access service as an information service, and why that classification is no longer appropriate in light of technical developments. Drawing on this background information, they then respond to specific questions from the NPRM in order to correct the FCC's mistakes.
The statement provides nearly a dozen different examples of consumer harm that could have been prevented by the light-touch, bright-line rules—like when AT&T distorted the market for content by using its gatekeeping power to zero-rate its own DIRECTV video service for its customers while charging third parties more for similar zero-rating of their data. It also gives several examples of consumer benefits that resulted from the 2015 Open Internet Order, like mobile service providers finally removing the prohibition that was stopping customers from tethering their personal computers to their mobile devices in order to use their mobile broadband connections.
The NPRM fundamentally misunderstands the basic technology underlying how the Internet works. If the FCC were to move forward with its NPRM as proposed, the results could be disastrous: the FCC would be making a major regulatory decision based on plainly incorrect assumptions about the underlying technology and Internet ecosystem, with harmful consequences for innovation across the Internet as a whole.
EFF to FCC: Tossing Net Neutrality Protections Will Set ISPs Free to Throttle, Block, and Censor the Internet for Users
Washington, D.C.—The Electronic Frontier Foundation (EFF) urged the FCC to keep in place net neutrality rules, which are essential to prevent cable companies like Comcast and Verizon from controlling, censoring, and discriminating against their subscribers’ favorite Internet content.
In comments submitted today, EFF came out strongly in opposition to the FCC’s plan to reverse the agency’s 2015 open Internet rules, which were designed to guarantee that service providers treat everyone’s content equally. The reversal would send a clear signal that those providers can engage in data discrimination, such as blocking websites, slowing down Internet speeds for certain content—known as throttling—and charging subscribers fees to access movies, social media, and other entertainment content over “fast lanes.” Comcast, Verizon, and AT&T supply Internet service to millions of Americans, many of whom have no other alternatives for high-speed access. Given the lack of competition, the potential for abuse is very real.
EFF’s comments join those of many other user advocates, leading computer engineers, entrepreneurs, faith communities, libraries, educators, tech giants, and start-ups that are fighting for a free and open Internet. Last week those players gave the Internet a taste of what a world without net neutrality would look like by temporarily blocking and throttling their content. Such scenarios aren’t merely possible—they are likely, EFF said in its comments. Internet service providers (ISPs) have already demonstrated that they are willing to discriminate against competitors and block content for their own benefit, while harming the Internet experience of users.
“ISPs have incentives to shape Internet traffic and the FCC knows full well of instances where consumers have been harmed. AT&T blocked data sent by Apple’s FaceTime software, Comcast has interfered with Internet traffic generated by certain applications, and ISPs have rerouted users’ web searches to websites they didn’t request or expect,” said EFF Senior Staff Attorney Mitch Stoltz. “These are just some examples of ISPs controlling our Internet experience. Users pay them to connect to the Internet, not decide for them what they can see and do there.”
Nearly 200 computer scientists, network engineers, and Internet professionals also submitted comments today highlighting deep flaws in the FCC’s technical description of how the Internet works. The FCC is attempting to pass off its incorrect technical analysis to justify its plan to reclassify ISPs so they are not subject to net neutrality rules. The engineers’ submission—signed by such experts as Vint Cerf, co-designer of the Internet’s fundamental protocols; Mitch Kapor, a personal computer industry pioneer and EFF co-founder; and programmer Sarah Allen, who led the team that created Flash video—sets the record straight about how the Internet works and how rolling back net neutrality would have disastrous effects on Internet innovation.
“We are concerned that the FCC (or at least Chairman Pai and the authors of the Notice of Proposed Rulemaking) appears to lack a fundamental understanding of what the Internet’s technology promises to provide, how the Internet actually works, which entities in the Internet ecosystem provide which services, and what the similarities and differences are between the Internet and other telecommunications systems the FCC regulates as telecommunications services,” the letter said.
“It is clear to us that if the FCC were to reclassify broadband access service providers as information services, and thereby put the bright-line, light-touch rules from the Open Internet Order in jeopardy, the result could be a disastrous decrease in the overall value of the Internet.”
For EFF’s comments:
For the engineers’ letter:
For more about EFF’s campaign to keep net neutrality:
The United States Trade Representative (USTR) has just released its trade negotiating objectives [PDF] for a revision of NAFTA, the North American Free Trade Agreement between the United States, Mexico, and Canada. NAFTA is expected to open up a new front in big content's never-ending battle for stricter copyright rules, following the unexpected defeat of the Trans-Pacific Partnership (TPP). Meanwhile, big tech companies are now wielding increasing influence with the USTR, demanding that it also negotiate rules that protect their businesses, such as prohibitions against restrictions on the cross-border transfer of data.
In EFF's comments to the USTR about what its negotiating objectives should be, we urged it not to include new copyright rules in NAFTA, because of how this would prevent the United States from improving its current law or adapting to technological change. We also expressed the need for caution about including some of the new digital trade (or e-commerce) rules that big tech companies have been asking for, for similar reasons, and because the trade negotiation process notoriously lacks the balance that would be required for it to negotiate a sound set of rules.
Copyright Rules
The negotiating objectives are hopelessly general, but it seems that our requests largely fell on deaf ears. The objectives relevant to intellectual property include commitments to:
- Ensure provisions governing intellectual property rights reflect a standard of protection similar to that found in U.S. law.
- Provide strong protection and enforcement for new and emerging technologies and new methods of transmitting and distributing products embodying intellectual property, including in a manner that facilitates legitimate digital trade. ...
- Ensure standards of protection and enforcement that keep pace with technological developments, and in particular ensure that rightholders have the legal and technological means to control the use of their works through the Internet and other global communication media, and to prevent the unauthorized use of their works.
- Provide strong standards [of, sic] enforcement of intellectual property rights, including by requiring accessible, expeditious, and effective civil, administrative, and criminal enforcement mechanisms.
These provisions are consistent with the U.S. demanding similar provisions to those that had been contained in the TPP, including life plus 70 year terms of copyright protection, criminal penalties for "commercial scale" copyright infringement, and legal protections for DRM—all of which would be new to NAFTA. Disappointingly, there is no reference to be found to the inclusion of a "fair use" exception to copyright, as we had requested in our submission.
Digital Trade (E-Commerce) Rules
As for digital trade, the objectives include commitments to:
- Ensure non-discriminatory treatment of digital products transmitted electronically and guarantee that these products will not face government-sanctioned discrimination based on the nationality or territory in which the product is produced.
- Establish rules to ensure that NAFTA countries do not impose measures that restrict crossborder data flows and do not require the use or installation of local computing facilities.
- Establish rules to prevent governments from mandating the disclosure of computer source code.
While some of these rules might not be harmful if they were drafted in an adequately open and consultative fashion, we have previously expressed concerns that the ban on restrictions on cross-border data flows may not allow countries adequate policy space to protect the privacy of users' data. We are also worried about the possibility that a blanket ban on laws requiring the disclosure of source code could limit countries from introducing new measures to protect users from vulnerabilities in digital products such as routers and Internet of Things (IoT) devices.
Our New Infographic Makes Sense of It All
You might well be wondering how the new version of NAFTA will compare with other digital trade negotiations, such as the TPP (which could still rise again between the other eleven countries besides the United States), and the Regional Comprehensive Economic Partnership (RCEP, whose negotiators are meeting this week in Hyderabad, India). To help explain, we've put together this infographic which illustrates five of the major ongoing trade agreements that are likely to contain provisions on digital issues. It provides a quick overview of their current status, the countries involved, and the issues that they contain.
One thing that all of these agreements have in common is that there is no easy way for users to access them. Negotiation rounds take place in far-flung cities around the world, often with little or no notice to the general public, next to no transparency about the texts under discussion, and little or no official access to the negotiators for public interest advocates such as EFF. Nevertheless, EFF is on the ground in Hyderabad this week to stand up for users, and we plan to do the same in the coming NAFTA negotiations too.
The negotiating objectives the USTR released today are nowhere near detailed enough to tell us what rules it will really be asking of our trading partners. And that's dangerous, because we don't really know what we're fighting against, and whether our fears are justified or overblown. Worse, we might never know until the agreement is concluded—unless it is leaked in the meantime. That's just not acceptable, and it needs to change.
Keep reading Deeplinks for updates on the progress of each of these trade agreements, and how they will affect you. And if you'd like to support our difficult work in fighting for users' rights in all of these secretive venues, you can help by donating to EFF.
Border agents may not use travelers’ laptops, phones, and other digital devices to access and search cloud content, according to a new document by U.S. Customs and Border Protection (CBP). CBP wrote this document on June 20, 2017, in response to questions from Sen. Wyden (D-OR). NBC published it on July 12. It states:
In conducting a border search, CBP does not access information found only on remote servers through an electronic device presented for examination, regardless of whether those servers are located abroad or domestically. Instead, border searches of electronic devices apply to information that is physically resident on the device during a CBP inspection.
This is a most welcome change from prior CBP policy and practice. CBP’s 2009 policy on border searches of digital devices does not prohibit border agents from using those devices to search travelers’ cloud content. In fact, that policy authorizes agents to search “information encountered at the border,” which logically would include cloud content encountered by searching a device at the border.
We do know that border agents have used travelers’ devices to search their cloud content. Many news reports describe border agents scrutinizing social media and communications apps on travelers’ phones; because such app content typically resides on remote servers, these accounts show agents conducting cloud searches.
EFF will monitor whether actual CBP practice lives up to this salutary new policy. To help ensure that border agents follow it, CBP should publish it. So far, the public only has second-hand information about this “nationwide muster” (the term CBP’s June 20 document uses to describe this new CBP written policy on searching cloud data). Also, CBP should stop seeking social media handles from foreign visitors, which blurs CBP’s new instruction to border agents that cloud searches are off limits.
Separately, CBP’s responses to Sen. Wyden’s questions explain what will happen to a U.S. citizen who refuses to comply with a border agent’s demand to disclose their device password (or unlock their device) in order to allow the agent to search their device:
[A]lthough CBP may detain an arriving traveler’s electronic device for further examination, in the limited circumstances when that is appropriate, CBP will not prevent a traveler who is confirmed to be a U.S. citizen from entering the country because of a need to conduct that additional examination.
This is what EFF told travelers would happen in our March 2017 border guide, based on law and reported CBP practice. It is helpful that CBP has confirmed this in writing. However, CBP also should publicly state whether U.S. lawful permanent residents (green card holders) will be denied entry for not facilitating a CBP search of their devices. They should not be denied entry. Notably, Sen. Wyden asked CBP to answer this question about all “U.S. persons,” and not just U.S. citizens.
CBP’s responses leave other important questions unanswered. For example, CBP should publicly state whether, when border agents ask travelers for their device passwords, the agents must (in the words of Sen. Wyden) “first inform the traveler that he or she has the right to refuse.” CBP did not answer this question. The international border is an inherently coercive environment, where harried travelers must seek permission to come home from uniformed and frequently armed agents in an unfamiliar space. To ensure that agents do not strong-arm travelers into surrendering their digital privacy, agents should be required to inform travelers that they may choose not to unlock their devices.
Also, CBP should publicly answer Sen. Wyden’s question about how many times in the last five years CBP has searched a device “at the request of another government agency.” Such searches will usually be improper. Historically, courts have granted border agents greater search powers than other law enforcement officials, but only for purposes of enforcing customs and immigration laws. If border agents search travelers at the request of other agencies, they presumably do so for other purposes, and so use of their heightened powers is improper. While CBP’s document provides information about CBP’s assistance requests to other agencies (for example, to seek technical help with decryption), this sheds no light on other agencies’ requests to CBP to use a traveler’s presence at the border as an excuse to conduct a warrantless search, which likely would not be justified at the interior of the country.
EFF applauds Sen. Wyden for his leadership in congressional oversight of CBP’s border device searches. We also thank CBP for answering some of Sen. Wyden’s questions. But many questions remain.
CBP’s June 2017 responses confirm that much more must be done to protect travelers’ digital privacy at the U.S. border. An excellent first step would be to enact Sen. Wyden’s bipartisan bill to require border agents to get a warrant before searching the digital devices of U.S. persons.
A Minnesota sheriff’s office must release emails showing how it uses biometric technology so that the community can understand how invasive it is, EFF argued in a brief filed in the Minnesota Supreme Court on Friday.
The case, Webster v. Hennepin County, concerns a particularly egregious failure to respond to a public records request that an individual filed as part of a 2015 EFF and MuckRock campaign to track biometric technology use by law enforcement across the country.
EFF has filed two briefs in support of web engineer and public records researcher Tony Webster’s request, with the latest brief [.pdf] arguing that agencies must provide information contained in emails to help the public understand how a local sheriff uses biometric technology. The ACLU of Minnesota joined EFF on the brief.
As we write in the brief:
This case is not about whether or how the government may collect biometric data and develop and domestically deploy information-retrieval technology as a potential sword against the general public. That is just one debate we must have, but critical to it and all public debates is that it be informed by public [records]
The case began when Webster filed a request based on EFF’s letter template with Hennepin County, a jurisdiction that includes Minneapolis, host city of the 2018 Super Bowl. He sought emails, contracts, and other records related to the use of technology that can scan and recognize fingerprints, faces, irises, and other forms of biometrics.
After the county basically ignored the request, Webster sued. An administrative law judge ruled in 2015 that the county had violated the state’s public records law both because it failed to provide documents to Webster and because it did not have systems in place to quickly search and disclose electronic records.
An intermediate appellate court ruled in 2016 that the county had to turn over the records Webster sought, but it reversed the lower court’s ruling that the county did not have adequate procedures in place to respond to public records requests.
Both Webster and the county appealed the ruling to the Minnesota Supreme Court. In its appeal, the county argues that public records requesters create an undue burden on agencies when they ask agencies to search for particular keywords or search terms.
EFF’s brief in support of Webster points out the flaws in the county’s search term argument. Having requesters identify specific search terms for documents they seek helps agencies conduct better searches for records while narrowing the scope of the request. This ultimately reduces the burden on agencies and leads to records being released more quickly.
EFF would like to thank attorneys Timothy Griffin and Thomas Burman of Stinson Leonard Street LLP for drafting the brief and serving as local counsel.
Australian PM Calls for End-to-End Encryption Ban, Says the Laws of Mathematics Don't Apply Down Under
"The laws of mathematics are very commendable but the only law that applies in Australia is the law of Australia", said Australian Prime Minister Malcolm Turnbull today. He has been rightly mocked for this nonsense claim, which foreshadows moves to require online messaging providers to provide law enforcement with back door access to encrypted messages. He explained that "We need to ensure that the internet is not used as a dark place for bad people to hide their criminal activities from the law." It bears repeating that Australia is part of the secretive spying and information-sharing Five Eyes alliance.
But despite the well-deserved mockery that ensued, we shouldn't make too much light of the real risk that this poses to Internet freedom in Australia. It's true enough, for now, that a ban on end-to-end encrypted messaging in Australia would have absolutely no effect on "bad people," who would simply avoid major platforms forced to weaken their encryption in favor of other apps that use strong end-to-end encryption based on industry-standard mathematical algorithms. Instead, it would hurt ordinary citizens who rely on encryption to make sure that their conversations are secure and private from prying eyes.
However, as similar demands are made elsewhere around the world, more and more app developers might fall under national laws that require them to compromise their encryption standards. Users of those apps, who may have a network of contacts who use the same app, might hesitate to shift to another app that those contacts don't use, even if it would be more secure. They might also worry that using end-to-end encryption would be breaking the law (a concern that "bad people" tend to be far less troubled by). This will put those users at risk.
If enough countries go down the same misguided path, with Australia following in the footsteps of Russia and the United Kingdom, the eventual result could be a new international agreement banning strong encryption. Indeed, the Prime Minister's statement is explicit that this is exactly what he would like to see. It may seem like an unlikely prospect for now, with strong statements at the United Nations level in support of end-to-end encryption, but we truly can't know what the future will bring. What seems like a global accord today might very well start to crumble as more and more countries defect from it.
We can't rely on politicians to protect our privacy, but thankfully we can rely on math ("maths", as Australians say). That's what makes access to strong encryption so important, and Australia's move today so worrying. Law enforcement should have the tools they need to investigate crimes, but that cannot extend to a ban on the use of mathematical algorithms in software. Mr Turnbull has to understand that we either have an Internet that "bad people" can use, or we don't have an Internet. It's actually as simple as that.
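The point that encryption is "just math" is concrete: the security of a key exchange rests on nothing but modular arithmetic that anyone can compute. Here is a toy Diffie-Hellman sketch (illustrative parameters only; real deployments use vastly larger primes or elliptic curves, and this is not any messaging app's actual protocol):

```python
import secrets

# Toy Diffie-Hellman key exchange. The parameters below are far too
# small for real security and exist only to show the arithmetic.
p = 0xFFFFFFFB  # a small prime standing in for a real group modulus
g = 5           # generator

a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent

A = pow(g, a, p)  # Alice sends A over the open network
B = pow(g, b, p)  # Bob sends B over the open network

# Each side derives the same shared secret without ever transmitting it:
# (g^b)^a = (g^a)^b (mod p). No statute can make this equation stop holding.
assert pow(B, a, p) == pow(A, b, p)
```

Any law that bans this is banning a few lines of arithmetic that anyone, anywhere, can reproduce, which is precisely why such bans only disarm the law-abiding.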
Broadband privacy? Say what? That was probably what you were asking yourself in March when you read about Congress’s vote to repeal privacy rules for your Internet provider. If you were paying attention—and you should be in an era where free press, voter privacy, and other constitutional rights are being challenged—you quickly realized what Congress did: it sold out your right to keep your browsing history and personal information private so the cable companies can sell it and make even more money off of you than they already do. Nice, right?
Luckily, many states, including California, have stepped up to the plate for you. They have introduced bills that give back to you the right to control how your private information is used by the companies that control the Internet pipeline into your home. In California, lawmakers in Sacramento are considering a bill that would reinstate those privacy rules, requiring Internet providers to get your permission before they can profit off of your personal information.
California has always led the country on many fronts: the environment and civil liberties, to name a few. It’s time for us to lead now. California’s top media organizations have gotten behind this legislation, AB 375, introduced by Assemblyman Ed Chau, a Democrat from Monterey Park.
If you care about your online privacy, you should, too. Here’s what the editorial boards of the state’s leading newspapers have to say:
AT&T, Comcast and other Internet service providers can continue to track every search you make and website you visit and sell that information to the highest bidder, under legislation recently signed by President Donald Trump.
That legislation, which reversed an Obama regulation, ought to alarm any American who ventures online, no matter their political persuasion. Now comes Assemblyman Ed Chau, a Democrat from Monterey Park, carrying a bill that for Californians would reverse the legislation and provide some privacy at a time when seemingly nothing is private.
Assembly Bill 375 would require Internet service providers to have customers “opt in” before they are allowed to sell information on their online searches and visits. Here’s hoping state lawmakers realize the value of having such a law and reject the telecom companies’ claim that it is “unfair” to not let them capitalize on the sort of information that Facebook and Google accumulate about their users.
The difference, of course, is that people pay heavily for Internet service because in the modern era, it is akin to a must-have utility. Facebook and Google are free. It is absurd that consumers paying companies for a service should be expected to accept that the price paid includes a gross loss of privacy.
California is uniquely able to take a strong stand in favor of consumer privacy. If the digital age has a technological and corporate center, it is here. We’re also large enough to make a difference nationally.
California has an obligation to take a lead in establishing the basic privacy rights of consumers using the Internet. Beyond being the right thing to do for the whole country, building trust in tech products is an essential long-term business strategy for the industry that was born in this region. California Assemblyman Ed Chau, D-Monterey Park, understands this. After Congressional Republicans erased Americans’ Internet broadband privacy protections in March, Chau crafted AB 375 to at least provide these rights to Californians.
If you happen to be a fan of the heavy metal band Isis (an unfortunate name, to be sure), you may have trouble ordering its merchandise online. Last year, PayPal suspended a fan who ordered an Isis t-shirt, presumably on the false assumption that there was some association between the heavy metal band and the terrorist group ISIS.
Then last month, Internet scholar and activist Sascha Meinrath discovered that entering words such as "ISIS" (or "Isis"), or "Iran", or (probably) other words from this U.S. government blacklist in the description field for a Venmo payment will result in an automatic block on that payment, requiring you to complete a pile of paperwork if you want to see your money again. This happens even if the full description field reads something like "Isis heavy metal album" or "Iran kofta kebabs, yum."
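The behavior Meinrath observed is consistent with a context-blind keyword filter: a payment is flagged whenever any blacklisted term appears in the description, no matter what surrounds it. The sketch below is our own minimal illustration of that failure mode; the term list and matching rule are assumptions for demonstration, not Venmo's actual code or the real government blacklist.

```python
# Hypothetical, context-blind keyword filter (illustration only;
# not Venmo's implementation or the actual blacklist).
BLOCKED_TERMS = {"isis", "iran"}

def is_flagged(description: str) -> bool:
    """Flag a payment if any blocked term appears as a word in the description."""
    words = description.lower().replace(",", " ").split()
    return any(term in words for term in BLOCKED_TERMS)

print(is_flagged("Isis heavy metal album"))   # True: flagged despite innocent context
print(is_flagged("Iran kofta kebabs, yum"))   # True: flagged
print(is_flagged("Birthday dinner"))          # False: not flagged
```

Because the filter never looks at context, "Isis heavy metal album" and a reference to the terrorist group are indistinguishable to it.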
These examples may seem trivial, but they reveal a more serious problem with the trust and responsibility that the Internet places in private payment intermediaries. Since even many non-commercial websites such as EFF's depend on such intermediaries to process payments, subscription fees, or donations, it's no exaggeration to say that payment processors form an important part of the financial infrastructure of today's Internet. As such, they ought to carry corresponding responsibilities to act fairly and openly towards their customers.
Unfortunately, given their reliance on bots, algorithms, handshake deals, and undocumented policies and blacklists to control what we do online, payment intermediaries aren't carrying out this responsibility very well. Given that these private actors are taking on responsibilities to help address important global problems such as terrorism and child online protection, the lack of transparency and accountability with which they execute these weighty responsibilities is a matter of concern.
The readiness of payment intermediaries to strike deals on those important issues leads, as a matter of course, to their enlistment by governments and special interest groups to do similar deals on narrower issues, such as protecting the financial interests of big pharma, big tobacco, and big content. It is in this way that payment intermediaries have insidiously become a weak link for censorship of free speech.

Cigarettes, Sex, Drugs, and Copyright
For example, if you're a smoker, and you try to buy tobacco products from a U.S. online seller using a credit card, you'll probably find that you can't. It's not illegal to do so, but thanks to a "voluntary" agreement with law enforcement authorities dating back to 2005, payment processors have effectively banned the practice—without any law or court judgment.
Another example, which we've previously written about, is the payment processors' arbitrary rules blocking sites that discuss sexual fetishes, even though that speech is constitutionally protected. The congruence between the payment intermediaries' terms of service on the issue suggests a degree of coordination between them, but their lack of transparency makes it impossible to be sure who was behind the ban and what channels they used to achieve it.
A third example is the ban on pharmaceutical sales. You can still buy pharmaceuticals online using a credit card, but these tend to be from unregulated, rogue pharmacies that lie to the credit card processors about the purpose for which their merchant account will be used. For the safer, regulated pharmacies that require a prescription for the drugs they sell online, such as members of the Canadian International Pharmacy Association (CIPA), the credit card processors enforce a blanket ban.
Finally there are "voluntary" best practices on copyright and trademark infringement. These include the RogueBlock program of the International Anti-Counterfeiting Coalition (IACC), launched in 2012, about which information is available online, along with a 2011 set of "Best Practices to Address Copyright Infringement and the Sales of Counterfeit Products on the Internet," about which no information is available online. The only way to find out about the standards that payment intermediaries use to block websites accused of copyright or trademark infringement is to read what academics have written about them.

Lack of Transparency Invites Abuse
The payment processors might respond that their terms of service are available online, which is true. However, these are ambiguous at best. On Venmo, transactions for items that promote hate, violence, or racial intolerance are banned, but nothing in its terms of service indicates that including the name of a heavy metal band in your transaction will place it in limbo. Similarly, if you delve deep enough into PayPal's terms of service, you will find out that selling tickets to professional UK football matches is banned, but you won't find out how this restriction came about, or who had a say in it.
Payment processors can do better. In 2012, in the wake of the payment industry's embargo of Wikileaks and its refusal to process payments to European vendors of horror films and sex toys, the European Parliament Committee on Economic and Monetary Affairs made the following resolution:
[The Committee c]onsiders it likely that there will be a growing number of European companies whose activities are effectively dependent on being able to accept payments by card; [and] considers it to be in the public interest to define objective rules describing the circumstances and procedures under which card payment schemes may unilaterally refuse acceptance.
We agree. Bitcoin and other cryptocurrencies notwithstanding, online payment processing remains largely oligopolistic. Agreements between the few payment processors that make up the industry and powerful commercial lobbies and governments, concluded in the shadows, can have deep impacts on entire online communities. When payment processors draw up their terms of service or develop algorithms based on industry-wide agreements, standards, or codes of conduct—especially if these involve governments or other third parties—they ought to do so through a process that is inclusive, balanced, and accountable.
The fact that you can't use Venmo to purchase an Isis t-shirt is just one amusing example. But the Shadow Regulation of the payment services industry is much more serious than that, also affecting culture, healthcare, and even your sex life online. Just as we've called other Internet intermediaries to account for the ways in which their "voluntary" efforts threaten free speech, the online payment services industry needs to be held to the same standard.
Yesterday's record-smashing Net Neutrality day of action showed that the Internet's users care about an open playing field and don't want a handful of companies to decide what we can and can't do online.
Today, we should also think about other ways in which small numbers of companies, including net neutrality's biggest foes, are trying to gain the same kinds of control, with the same grave consequences for the open web. Exhibit A is baking digital rights management (DRM) into the web's standards.
ISPs that oppose effective net neutrality protections say that they've got the right to earn as much money as they can from their networks, and if people don't like it, they can just get their internet somewhere else. But of course, the lack of competition in network service means that most people can't do this.
Big entertainment companies -- some of which are owned by big ISPs! -- say that because they can make more money if they can control your computer and get it to disobey you, they should be able to team up with browser vendors and standards bodies to make that a reality. If you don't like it, you can watch someone else's movies.
Like ISPs, entertainment companies think they can get away with this because they too have a kind of monopoly -- copyright, which gives rightsholders the power to control many uses of their creative works. But just like the current FCC Title II rules that stop ISPs from flexing their muscle to the detriment of web users, copyright law places limits on the powers of copyright holders.
Copyright can stop you from starting a business to sell unlicensed copies of the studios' movies, but it couldn't stop Netflix from starting a business that mailed DVDs around for money; it couldn't stop Apple from selling you a computer that would "Rip, Mix, Burn" your copyrighted music, and it couldn't stop cable companies from starting businesses that retransmitted broadcasters' signals.
That competitive balance makes an important distinction between "breaking the law" (not allowed) and "rocking the boat" (totally allowed). Companies that want to rock the boat are allowed to enter the market with new, competitive offerings that go places the existing industry fears to tread, and so they discover new, unmapped and fertile territory for services and products that we come to love and depend on.
But overbroad and badly written laws like Section 1201 of the 1998 Digital Millennium Copyright Act (DMCA) upset this balance. DMCA 1201 bans tampering with DRM, even if you're only doing so to exercise the rights that Congress gave you as a user of copyrighted works. This means that media companies that bake DRM into the standards of the web get to decide what kinds of new products and services are allowed to enter the market, effectively banning others from adding new features to our media, even when those features have been declared legal by Congress.
ISPs are only profitable because there was an open Internet where new services could pop up, transforming the Internet from a technological curiosity into a necessity of life that hundreds of millions of Americans pay for. Now that the ISPs get steady revenue from our use of the net, they want network discrimination, which, like the discrimination used by DRM advocates, is an attempt to change "don't break the law" into "don't rock the boat" -- to force would-be competitors to play by the rules set by the cozy old guard.
For decades, activists struggled to get people to care about net neutrality, while their opponents from big telecom companies said, "people don't care, all they want is to get online, and that's what we give them." The once-quiet voices of net neutrality wonks have swelled into a chorus of people who realize that an open web is important to their future. As we saw yesterday, the public broadly demands protection for the open Internet.
Today, advocates for DRM say that "People don't care, all they want is to watch movies, and that's what we deliver." But there is an increasing realization that letting major movie studios tilt the playing field toward them and their preferred partners also endangers the web's future.
Don't take our word for it: last April, Professor Tim Wu, who coined the term "net neutrality" and is one of the world's foremost advocates for a neutral web, published an open letter to Tim Berners-Lee, inventor of the web and Director of the World Wide Web Consortium (W3C), where there is an ongoing effort to standardize DRM for the web.
In that letter, Wu wrote:
I think more thinking need be done about EME’s potential consequences for competition, both as between browsers, the major applications, and in ways unexpected. Control of chokepoints has always and will always be a fundamental challenge facing the Internet as we both know. That’s the principal concern of net neutrality, and has been a concern when it comes to browsers and their associated standards. It is not hard to recall how close Microsoft came, in the late 1990s and early 2000s, to gaining de facto control over the future of the web (and, frankly, the future) in its effort to gain an unsupervised monopoly over the browser market.
EME, of course, brings the anti-circumvention laws into play, and as you may know anti-circumvention laws have a history of being used for purposes different than the original intent (i.e., protecting content). For example, soon after it was released, the U.S. anti-circumvention law was quickly used by manufacturers of inkjet printers and garage-door openers to try and block out aftermarket competitors (generic ink, and generic remote controls). The question is whether the W3C standard with an embedded DRM standard, EME, becomes a tool for suppressing competition in ways not expected.
This week, Berners-Lee made important and stirring contributions to the net neutrality debate, appearing in this outstanding Web Foundation video and explaining how anti-competitive actions by ISPs endanger the things that made the web so precious and transformative.
Last week, Berners-Lee disappointed activists who'd asked for a modest compromise on DRM at the W3C, one that would protect competition and use standards to promote the same level playing field we seek in our Net Neutrality campaigns. Yesterday, EFF announced that it would formally appeal Berners-Lee's decision to standardize DRM for the web without any protection for its neutrality. In the decades of the W3C's existence, there has never been a successful appeal to one of Berners-Lee's decisions.
The odds are long here -- the same massive corporations that oppose effective net neutrality protections also oppose protections against monopolization of the web through DRM, and they can outspend us by orders of magnitude. But we're doing it, and we're fighting to win. That's because, like Tim Berners-Lee, we love the web and believe it can only continue as a force for good if giant corporations don't get to decide what we can and can't do with it.
In recent months, social media platforms—under pressure from a number of governments—have adopted new policies and practices to remove content that promotes terrorism. As the Guardian reported, these policies are typically carried out by low-paid contractors (or, in the case of YouTube, volunteers) and with little to no transparency and accountability. While the motivations of these companies might be sincere, such private censorship poses a risk to the free expression of Internet users.
As groups like the Islamic State have gained traction online, Internet intermediaries have come under pressure from governments and other actors, including the following:
- the Obama Administration;
- the U.S. Congress in the form of legislative proposals that would require Internet companies to report “terrorist activity” to the U.S. government;
- the European Union in the form of a “code of conduct” requiring Internet companies to take down terrorist propaganda within 24 hours of being notified, and via the EU Internet Forum;
- individual European countries such as the U.K., France and Germany that have proposed exorbitant fines for Internet companies that fail to take down pro-terrorism content; and,
- victims of terrorism who seek to hold social media companies civilly liable in U.S. courts for providing “material support” to terrorists by simply providing online platforms for global communication.
One of the coordinated industry efforts against pro-terrorism online content is the development of a shared database of “hashes of the most extreme and egregious terrorist images and videos” that the companies have removed from their services. The companies that started this effort—Facebook, Microsoft, Twitter, and Google/YouTube—explained that the idea is that by sharing “digital fingerprints” of terrorist images and videos, other companies can quickly “use those hashes to identify such content on their services, review against their respective policies and definitions, and remove matching content as appropriate.”
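The mechanics of such a shared database can be sketched in a few lines: one company fingerprints content it has removed, and other services check uploads against the pooled fingerprints. The sketch below is a simplification under our own assumptions; the companies have said real systems use robust "digital fingerprints" (perceptual hashes that survive minor edits, such as Microsoft's PhotoDNA), whereas a plain SHA-256, used here only for illustration, matches byte-identical files alone.

```python
import hashlib

# Simplified sketch of a shared hash database (illustration only).
# Real deployments reportedly use perceptual fingerprints; SHA-256
# here matches only exact copies.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint of a piece of content."""
    return hashlib.sha256(content).hexdigest()

def report_removed(content: bytes) -> None:
    """A participating company contributes the fingerprint of removed content."""
    shared_hash_db.add(fingerprint(content))

def matches_known_content(upload: bytes) -> bool:
    """Another service checks a new upload against the shared database."""
    return fingerprint(upload) in shared_hash_db

video = b"...bytes of a removed video..."
report_removed(video)
print(matches_known_content(video))         # True: an exact copy is detected
print(matches_known_content(video + b"x"))  # False: any change defeats an exact hash
```

Note what the last line implies: matching on exact hashes is trivially evaded, which is precisely why the companies describe more tolerant fingerprints, and why each company still reviews matches "against their respective policies" rather than removing automatically.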
As a second effort, the same companies created the Global Internet Forum to Counter Terrorism, which will help the companies “continue to make our hosted consumer services hostile to terrorists and violent extremists.” Specifically, the Forum “will formalize and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the UN.” The Forum will focus on technological solutions; research; and knowledge-sharing, which will include engaging with smaller technology companies, developing best practices to deal with pro-terrorism content, and promoting counter-speech against terrorism.
Internet companies are also taking individual measures to combat pro-terrorism content. Google announced several new efforts, while both Google and Facebook have committed to using artificial intelligence technology to find pro-terrorism content for removal.

Private censorship must be cautiously deployed
While Internet companies have a First Amendment right to moderate their platforms as they see fit, private censorship—or what we sometimes call shadow regulation—can be just as detrimental to users’ freedom of expression as governmental regulation of speech. As social media companies increase their moderation of online content, they must do so as cautiously as possible.
Through our project Onlinecensorship.org, we monitor private censorship and advocate for companies to be more transparent and accountable to their users. We solicit reports from users whenever Internet companies have removed specific posts, other content, or whole accounts.
We consistently urge companies to follow basic guidelines to mitigate the impact on users’ free speech. Specifically, companies should have narrowly tailored, clear, fair, and transparent content policies (i.e., terms of service or “community guidelines”); they should engage in consistent and fair enforcement of those policies; and they should have robust appeals processes to minimize the impact on users’ freedom of expression.
Over the years, we’ve found that companies’ efforts to moderate online content almost always result in overbroad takedowns or account deactivations. We are therefore justifiably skeptical that the latest efforts by Internet companies to combat pro-terrorism content will meet our basic guidelines.
A central problem for these global platforms is that such private censorship can be counterproductive. Users who engage in counter-speech against terrorism often find themselves on the wrong side of the rules if, for example, their post includes an image of one of more than 600 “terrorist leaders” designated by Facebook. In one instance, a journalist from the United Arab Emirates was temporarily banned from the platform for posting a photograph of Hezbollah leader Hassan Nasrallah with a LGBTQ pride flag overlaid on it—a clear case of parody counter-speech that Facebook’s content moderators failed to grasp.
A more fundamental problem is that crafting narrow definitions is difficult. What counts as speech that “promotes” terrorism? What even counts as “terrorism”? These U.S.-based companies may look to the State Department’s list of designated terrorist organizations as a starting point. But Internet companies will sometimes go further. Facebook, for example, deactivated the personal accounts of Palestinian journalists; it did the same to Chechen independence activists on the grounds that they were involved in “terrorist activity.” These examples demonstrate the challenges social media companies face in fairly applying their own policies.
A recent investigative report by ProPublica revealed how Facebook’s content rules can lead to seemingly inconsistent takedowns. The authors wrote: “[T]he documents suggest that, at least in some instances, the company’s hate-speech rules tend to favor elites and governments over grassroots activists and racial minorities. In so doing, they serve the business interests of the global company, which relies on national governments not to block its service to their citizens.” The report emphasized the need for companies to be more transparent about their content rules, and to have rules that are fair for all users around the world.

Artificial intelligence poses special concerns
We are concerned about the use of artificial intelligence automation to combat pro-terrorism content because of the imprecision inherent in systems that automatically block or remove content based on an algorithm. Facebook has perhaps been the most aggressive in deploying AI in the form of machine learning technology in this context. The company’s latest AI efforts include using image matching to detect previously tagged content, using natural language processing techniques to detect posts advocating for terrorism, removing terrorist clusters, removing new fake accounts created by repeat offenders, and enforcing its rules across other Facebook properties such as WhatsApp and Instagram.
This imprecision exists because it is difficult for humans and machines alike to understand the context of a post. While it’s true that computers are better at some tasks than people, understanding context in written and image-based communication is not one of those tasks. While AI algorithms can understand very simple reading comprehension problems, they still struggle with even basic tasks such as capturing meaning in children’s books. And while it’s possible that future improvements to machine learning algorithms will give AI these capabilities, we’re not there yet.
Google’s Content ID, for example, which was designed to address copyright infringement, has also blocked fair uses, news reporting, and even posts by copyright owners themselves. If automatic takedowns based on copyright are difficult to get right, how can we expect new algorithms to know the difference between a terrorist video clip that’s part of a satire and one that’s genuinely advocating violence?
Until companies can publicly demonstrate that their machine learning algorithms can accurately and reliably determine whether a post is satire, commentary, news reporting, or counter-speech, they should refrain from censoring their users by way of this AI technology.
Even if a company were to have an algorithm for detecting pro-terrorism content that was accurate, reliable, and had a minimal percentage of false positives, AI automation would still be problematic because machine learning systems are not robust to distributional change. Once machine learning algorithms are trained, they are as brittle as any other algorithm, and building and training machine learning algorithms for a complex task is an expensive, time-intensive process. Yet the world that algorithms are working in is constantly evolving and soon won’t look like the world in which the algorithms were trained.
This might happen in the context of pro-terrorism content on social media: once terrorists realize that algorithms are identifying their content, they will start to game the system by hiding their content or altering it so that the AI no longer recognizes it (by leaving out key words, say, or changing their sentence structure, or a myriad of other ways—it depends on the specific algorithm). This problem could also go the other way: a change in culture or how some group of people express themselves could cause an algorithm to start tagging their posts as pro-terrorism content, even though they’re not (for example, if people co-opted a slogan previously used by terrorists in order to de-legitimize the terrorist group).
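Both failure modes above can be made concrete with a toy detector. The trigger phrase and the examples below are entirely hypothetical, chosen only to show how a fixed pattern is simultaneously easy to evade (trivial spelling changes) and prone to new false positives when language drifts (a phrase co-opted to criticize the group is still flagged):

```python
# Hypothetical fixed-pattern detector illustrating evasion and drift
# (the phrase and examples are invented for demonstration).
TRIGGER_PHRASES = {"join the caliphate"}

def detect(post: str) -> bool:
    """Flag a post if any trigger phrase appears verbatim."""
    text = post.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

# Evasion: trivially altered spelling slips past the fixed pattern.
print(detect("join the caliphate"))   # True: matches as trained
print(detect("j0in the c4liphate"))   # False: evaded by character substitution

# Drift: counter-speech that quotes the slogan is flagged anyway.
print(detect("Nobody should join the caliphate - it's a lie"))  # True: false positive
```

A deployed machine learning model is far more sophisticated than a phrase list, but the dynamic is the same: adversaries adapt faster than the training data, and legitimate speech that reuses the adversaries' language gets caught in the net.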
We strongly caution companies (and governments) against assuming that technology will be a panacea for identifying pro-terrorism content, because this technology simply doesn’t yet exist.

Is taking down pro-terrorism content actually a good idea?
Apart from the free speech and artificial intelligence concerns, there is an open question of efficacy. The sociological assumption is that removing pro-terrorism content will reduce terrorist recruitment and community sympathy for those who engage in terrorism. But the question is not whether terrorists are using the Internet to recruit new operatives; it is whether taking down pro-terrorism content and accounts will meaningfully contribute to the fight against global terrorism.
Governments have not sufficiently demonstrated this to be the case. And some experts believe this absolutely not to be the case. For example, Michael German, a former FBI agent with counter-terrorism experience and current fellow at the Brennan Center for Justice, said, “Censorship has never been an effective method of achieving security, and shuttering websites and suppressing online content will be as unhelpful as smashing printing presses.” In fact, as we’ve argued before, censoring the content and accounts of determined groups could be counterproductive and actually result in pro-terrorism content being publicized more widely (a phenomenon known as the Streisand Effect).
Additionally, permitting terrorist accounts to exist and allowing pro-terrorism content to remain online, including that which is publicly available, may actually be beneficial by providing opportunities for ongoing engagement with these groups. For example, a Kenyan government official stated that shutting down an Al Shabaab Twitter account would be a bad idea: “Al Shabaab needs to be engaged positively and [T]witter is the only avenue.”
Keeping pro-terrorism content online also contributes to journalism, open source intelligence gathering, academic research, and generally the global community’s understanding of this tragic and complex social phenomenon. On intelligence gathering, the United Nations has said that “increased Internet use for terrorist purposes provides a corresponding increase in the availability of electronic data which may be compiled and analysed for counter-terrorism purposes.”

In conclusion
While we recognize that Internet companies have a right to police their own platforms, we also recognize that such private censorship is often in response to government pressure, which is often not legitimately wielded.
Governments often get private companies to do what they can’t do themselves. In the U.S., for example, pro-terrorism content falls within the protection of the First Amendment. Other countries, many of which do not have similarly robust constitutional protections, might nevertheless find it politically difficult to pass speech-restricting laws.
Ultimately, we are concerned about the serious harm that sweeping censorship regimes—even by private actors—can have on users, and society at large. Internet companies must be accountable to their users as they deploy policies that restrict content.
First, they should make their content policies narrowly tailored, clear, fair, and transparent to all—as the Guardian’s Facebook Files demonstrate, some companies have a long way to go.
Second, companies should engage in consistent and fair enforcement of those policies.
Third, companies should ensure that all users have access to a robust appeals process—content moderators are bound to make mistakes, and users must be able to seek justice when that happens.
Fourth, until artificial intelligence systems can be proven accurate, reliable and adaptable, companies should not deploy this technology to censor their users’ content.
Finally, we urge those companies that are subject to increasing governmental demands for backdoor censorship regimes to improve their annual transparency reporting to include statistics on takedown requests related to the enforcement of their content policies.
When you attack the Internet, the Internet fights back.
Today, the Internet went all out in support of net neutrality. Hundreds of popular websites featured pop-ups suggesting that those sites had been blocked or throttled by Internet service providers. Some sites got hilariously creative—Twitch replaced all of its emojis with that annoying loading icon. Netflix shared GIFs that would never finish loading. PornHub simply noted that “slow porn sucks.”
Together, we painted an alarming picture of what the Internet might look like if the FCC goes forward with its plan to roll back net neutrality protections: ISPs prioritizing their favored content sources and deprioritizing everything else. (Fight for the Future has put together a great collection of examples of how sites participated in the day of action.)
Today has been about Internet users across the country who are afraid of large ISPs getting too much say in how we use the Internet. Voices ranged from huge corporations to ordinary Internet users like you and me.
Together with Battle for the Net and other friends, we delivered 1.6 million comments to the FCC, breaking the record we set during Internet Slowdown Day in 2014. The message was clear: we all rely on the Internet. Don’t dismantle net neutrality protections.
If you haven’t added your voice yet, it’s not too late. Take a few moments to tell the FCC why net neutrality is important to you. If you already have, take a moment to encourage your friends to do the same.
Here are just a few examples of what Team Internet has been saying about net neutrality today.
“We live in an uncompetitive broadband market. That market is dominated by a handful of giant corporations that are being given the keys to shape telecom policy. The big internet companies that might challenge them are doing it half-heartedly. And [FCC Chairman] Ajit Pai seems determined to offer up a massive corporate handout without listening to everyday Americans.
“Is this what you want? Does this sound like a path toward better, faster, cheaper internet access? Toward better products and services in a more competitive market? To me, it sounds like Americans need to demand that our government actually hear our concerns, look at our skyrocketing bills, and make real policy that respects us, instead of watching the staff of an unelected official laugh as he ignores us. It sounds like we need to flood the offices of the FCC and Congress with calls and paperwork, demanding to know how giving handouts to huge corporations will help us.”
“Title II net neutrality protections are the civil rights and free speech rules for the internet. When traditional media outlets refuse to pay attention, Black, indigenous, queer and trans internet users can harness the power of the Internet to fight for lives free of police brutality and discrimination. This is why we’ll never stop fighting for enforcement of the net neutrality rules we fought for and saw passed by the FCC two years ago. There’s too much at stake to urge anything less.”
“We’re still picking ourselves off the floor from all the laughing we did when AT&T issued a press release this afternoon announcing that it was joining the ‘Day of Action for preserving and advancing the open internet.’
“If only it were true. In reality, AT&T is just a company that is deliberately misleading the public. Their lobbyists are lying. They want to kill Title II — which gives the FCC the authority to actually enforce net neutrality — and are trying to sell a congressional ‘compromise’ that would be as bad or worse than what the FCC is proposing. No thanks.”
“Everyone except these ISPs benefits from an open Internet… that’s it. It’s like a handful of companies. Not only is this about business—and it is about business and innovation—it’s also about freedom of speech.”
“No matter what, do not get discouraged or retreat into a state of silence and inaction. There are many like me who are listening and the role each of us plays is vital. We are not alone in believing that the FCC should be a governmental agency ‘of the people, by the people, and for the people.’”
To everyone who has participated in today’s day of action, thank you.
Every year, EFF has lawyers with its Coders’ Rights Project on hand in Las Vegas at Black Hat, B-Sides and DEF CON for security researchers with legal questions about their research or presentations. EFF’s Coders’ Rights Project protects programmers, researchers, hackers, and developers engaged in cutting-edge exploration of technology. Security and encryption researchers help build a safer future for all of us using digital technologies, but too many legitimate researchers face serious legal challenges that prevent or inhibit their work.
The 2017 summer security conference legal team will include:
- Staff Attorney Kit Walsh, who works on exemptions protecting security research and vehicle repair, along with a host of other beneficial activities threatened by Section 1201, the anti-circumvention provision of the Digital Millennium Copyright Act (DMCA).
- Criminal Defense Staff Attorney Stephanie Lacambra, a former Federal and San Francisco Public Defender who has turned her expertise toward defending your civil liberties online.
- Senior Staff Attorney Nate Cardozo, a Computer Fraud and Abuse Act expert who works on issues including the Wassenaar Arrangement, cryptography, hardware hacking, and electronic privacy law.
- Deputy Executive Director and General Counsel Kurt Opsahl, who leads the Coders’ Rights Project and has been helping security researchers present at the summer security conferences since DEF CON was at the Alexis Park.
If you are wondering whether your research falls into a legal gray area, or are concerned that a vendor will threaten legal action, please reach out to firstname.lastname@example.org. All EFF legal consultations are pro bono (free), part of our commitment to help the security researcher community. You can also stop by the EFF booths at each conference to make an appointment with one of our attorneys, though we highly recommend contacting us as far in advance of your talk as possible.
And as always, even if you don’t have a legal question, come say hi at the booth or watch one of our talks at DEF CON.