Electronic Frontier Foundation
Website blocking to deal with alleged copyright infringement is like cutting off your hand to deal with a papercut. Sure, you don’t have a papercut anymore, but you’ve also lost a lot more than you’ve gained. The latest country to consider a website blocking proposal is Japan, and EFF has responded to the call for comment by sharing all the reasons that cutting off websites is a terrible solution for copyright violations.
In response to infringement of copyrighted material, specifically citing a concern for manga, the government of Japan began work on a proposal that would make certain websites inaccessible in Japan. We’ve seen proposals like this before, most recently in the European Union’s Article 13.
In response to Japan’s proposal, EFF explained that website blocking is not effective at the stated goal of protecting artists and their work. First, it can be easily circumvented. Second, it ends up capturing a lot of lawful expression. Blocking an entire website does not distinguish between legal and illegal content, punishing both equally. Blocking and filtering by governments has frequently been found to violate national and international principles of free expression [pdf].
EFF also shared the research that leading Internet engineers conducted in response to a potential U.S. law that would have enabled website blocking. Those engineers concluded that website blocking would lead to network errors and security problems.
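The engineers' circumvention point is easy to see in practice: most national blocking orders are implemented at the DNS level, and a user can sidestep them simply by asking a different resolver. The sketch below is our own illustrative example (not drawn from the consultation response); it hand-builds a minimal DNS query using only Python's standard library, so it can be sent to any resolver the user chooses rather than an ISP's filtered one.

```python
import socket
import struct

def build_query(hostname: str) -> bytes:
    """Build a minimal DNS query for an A (IPv4 address) record."""
    # Header: ID=0x1234, flags=0x0100 (recursion desired),
    # 1 question, 0 answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, e.g. "eff.org" -> b"\x03eff\x03org\x00".
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def resolve(hostname: str, resolver_ip: str) -> str:
    """Ask a specific resolver directly, bypassing the system default.

    Assumes the first answer is an A record whose name field is a 2-byte
    compression pointer (true of typical recursive-resolver replies).
    """
    query = build_query(hostname)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(query, (resolver_ip, 53))
        reply, _ = sock.recvfrom(512)
    offset = len(query)  # skip the header and the echoed question
    # After the 2-byte name pointer: TYPE, CLASS, TTL, RDLENGTH, then RDATA.
    atype, _, _, rdlength = struct.unpack_from(">HHIH", reply, offset + 2)
    if atype != 1 or rdlength != 4:
        raise ValueError("first answer was not an IPv4 A record")
    return ".".join(str(b) for b in reply[offset + 12 : offset + 16])

# Usage (requires network access): query a public resolver such as 8.8.8.8
# instead of a filtering ISP resolver, e.g. resolve("example.com", "8.8.8.8")
```

The same ease of switching resolvers is why the engineers warned that DNS-based blocking both fails at its goal and undermines security measures like DNSSEC that depend on resolvers giving truthful answers.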
According to numerous studies, the best answer to the problem of online infringement is providing easy, lawful alternatives. Doing this also has the benefit of not penalizing legitimate expression the way blocking does.
Quite simply, website blocking doesn’t work, violates the right to free expression, and breaks the Internet. Japan shouldn’t go down this path; it should instead look to proven alternatives.
It’s already much too difficult to invalidate bad patents—the kind that never should have been issued in the first place. Now, unfortunately, the Patent Office has proposed regulation changes that will make it even harder. That’s the wrong path to take. This week, EFF submitted comments opposing the Patent Office’s proposal.
Congress created some new kinds of Patent Office proceedings as part of the America Invents Act (AIA) of 2011. That was done with the goal of improving patent quality by giving third parties the opportunity to challenge patents at the Patent Trial and Appeal Board, or PTAB. EFF used one of these proceedings, known as inter partes review, to successfully challenge a patent that had been used to sue podcasters.
Congress didn’t explicitly say how these judges should interpret patent claims in AIA proceedings. But the Patent Office, until recently, read the statute as EFF still does: it requires the office to interpret patent claims in PTAB challenges the same way it does in all other proceedings. That approach requires giving the words of a patent claim their broadest reasonable interpretation (BRI). That’s different than the approach used in federal courts, which apply a standard that can produce a claim of narrower scope.
Using the BRI approach in AIA proceedings makes sense. Critically, it ensures the Patent Office reviews a wide pool of prior art (publications and products that pre-date the patent application). If the patent owner thinks this pool is too broad, it can amend claims to narrow their scope and avoid invalidating prior art. Requiring patent owners to amend their claims to avoid invalidating prior art encourages innovation and deters baseless litigation by giving the public clearer notice about what the patent does and does not claim.
But you don’t have to take our word for it. Barely two years ago, the Patent Office made the same argument to the Supreme Court to justify the agency’s use of the BRI approach in AIA proceedings. The Supreme Court agreed. In Cuozzo v. Lee [PDF], the court upheld the agency’s approach based on the text and structure of the AIA, a century of agency practice, and considerations of fairness and efficiency.
After successfully convincing the Supreme Court that the BRI standard should apply in AIA proceedings, why has the PTO changed its mind? Unfortunately, the Patent Office’s notice says little to explain its sudden change of course. Nor does it offer any reasons why this change would improve patent quality, or the efficiency of patent litigation. Apparently, the Patent Office assumes minimizing differences between two deliberately different types of proceedings will be more efficient. That assumption is flawed. The PTAB’s interpretation of claim language will only be relevant to a district court if similar terms are in dispute. If not, the change will only ensure that more lawsuits based on bad patents clog up the courts.
The timing of the Patent Office’s proposal may hint at its impetus. When the agency adopted and argued for the BRI standard, the Director was Michelle Lee. On February 8, 2018, Andrei Iancu became Director. Three months later, on May 9, the Patent Office proposed abandoning the BRI standard. In his keynote speech, Director Iancu referenced unfounded criticisms of AIA proceedings, from “some” who, “pointing to the high invalidation rates . . . hate the new system with vigor, arguing that it’s an unfair process that tilts too much in favor of the petitioner.” The Patent Office’s sudden change of view on this topic may be a capitulation to these unfounded criticisms and a sign of further policy changes to come.
We hope the Patent Office will reconsider its proposal, after considering our comments, as well as those submitted by the R Street Institute and CCIA, a technology trade group. Administrative judges must remain empowered to weed out those patents that should never have issued in the first place.
Should Your Company Help ICE? “Know Your Customer” Standards for Evaluating Domestic Sales of Surveillance Equipment
Employees at Google, Microsoft, and Amazon have raised public concerns about those companies assisting U.S. military, law enforcement, and the Immigration and Customs Enforcement Agency (ICE) in deploying various kinds of surveillance technologies.
These public calls from employees raise important questions: what steps should a company take to ensure that government entities who purchase or license their technologies don’t misuse them? When should they refuse to sell to a governmental entity?
Tech companies must step up and ensure that they aren’t assisting governments in committing human rights abuses.
While the specific context of U.S. law enforcement using new surveillance technologies is more recent, the underlying questions aren’t. In 2011, EFF proposed a basic Know Your Customer framework for these questions. The context then was repressive foreign governments’ use of technology from U.S. and European companies to facilitate human rights abuses. EFF’s framework was cited favorably by the United Nations in its implementation guide for technology companies on the U.N.’s own Guiding Principles on Business and Human Rights.
Now, those same basic ideas about investigation, auditing, and accountability can be, and should be, deployed domestically.
Put simply, tech companies, especially those selling surveillance equipment, must step up and ensure that they aren’t assisting governments in committing human rights, civil rights and civil liberties abuses. This obligation applies whether those governments are foreign or domestic, federal or local.
One way tech companies can navigate this difficult issue is by adopting a robust Know Your Customer program, modeled on requirements that companies already have to follow in the export control and anti-bribery context. Below, we outline our proposal for sales to foreign governments from 2011, with a few updates to reflect shifting from an international to domestic focus. Employees at companies that sell to government agencies, especially agencies with a record as troubling as ICE, may want to advocate for this as a process to protect against future corporate complicity.
We propose a simple framework:
- Companies selling surveillance technologies to governments need to affirmatively investigate and "know your customer" before and during a sale. We suggest customer investigations similar to what many of these companies are already required to do under the Foreign Corrupt Practices Act and the export regulations for their foreign customers.
- Companies need to refrain from participating in transactions where their "know your customer" investigations reveal either objective evidence or credible concerns that the technologies provided by the company will be used to facilitate governmental human or civil rights or civil liberties violations.
This framework can be implemented voluntarily, and should include independent review and auditors, employee participation, and public reporting. A voluntary approach can be more flexible as technologies change and situations around the world shift. Nokia Siemens Networks has already adopted a Human Rights Policy that incorporates some of these guidelines. In a more recent example, Google's AI principles contain many of these steps along with guidance about how they should be applied.
If companies don’t act on their own, however, and don’t act with convincing transparency and commitment, then a legal approach may be necessary. Microsoft has already indicated that it not only would be open to a legal (rather than voluntary) approach, but that such an approach is necessary. For technology companies to be truly accountable, a legal approach can and should include extending liability to companies that knowingly and actively facilitate governmental abuses, including through aiding and abetting liability. EFF has long advocated for corporate liability for aiding governmental surveillance, including in the Doe v. Cisco case internationally and in our Hepting v. AT&T case domestically.
Elaborating on the basic framework above, here are some guidelines:
[Note: These guidelines use key terms—Technologies, Transaction, Company, and Government—that are defined at the bottom and capitalized throughout.]
Affirmatively Investigate: The Company must have a process, led by a specifically designated person, to engage in an ongoing evaluation of whether Technologies or Transactions will be, or are being, used to aid, facilitate, or cover up human rights, civil rights, and civil liberties abuses (“governmental abuses”).
This process needs to be more than lip service and needs to be verifiable (and verified) by independent outsiders. It should also include concerned employees, who deserve to have a voice in ensuring that the tools they develop are not misused by governments. This must be an organizational commitment, with effective enforcement mechanisms in place. It must include tools, training, and education of personnel, plus career consequences when the process is not followed. In addition, in order to build transparency and solidarity, a Company that decides to refuse (or continue) further service on the basis of these standards should, where possible, report that decision publicly so that the public understands the decisions and other companies can have the benefit of their evaluation.
The investigation process should include, at a minimum:
- Review what the purchasing Government and Government agents, and Company personnel and agents, are saying about the use of the Technologies, both before and during any Transaction. This includes, among other things, review of sales and marketing materials, technical discussions and questions, presentations, technical and contractual specifications, and technical support conversations or requests. For machine learning or AI applications, it must include review of training data and mechanisms to identify what questions the technology will be asked to answer or learn about. Examples include:
- Evidence in the Doe v. Cisco case, arising from Cisco’s participation with the Chinese government in building surveillance tools aimed at identifying Falun Gong, includes presentations made by Cisco employees that brag about how their technology can help the Chinese Government combat the “Falun Gong Evil Religion.”
- In 2016, the ACLU of Northern California published a report outlining how Geofeedia advertised that its location-based, social media surveillance system could be used by government offices and the police to monitor the protest activities of activists, including specifically activists of color, raising core First Amendment concerns.
- Review the capabilities of the Technology for human rights abuses and consider possible mitigation measures, both technical and contractual.
- For instance, the fact that facial recognition software misidentifies people of color at a much higher rate than white people is a clear signal that the Technology is highly vulnerable to governmental abuses.
- Note that we do not believe that Companies should be held responsible merely for selling general purpose or even dual-use products to the government that are later misused, as long as the Company conducted a sufficient investigation that did not reveal governmental abuse as a serious risk.
- Review the Government’s laws, regulations, and practices regarding surveillance, including approval of purchase of surveillance equipment, laws concerning interception of communications, access to stored communications, due process requirements, and other relevant legal process. For sellers of machine learning and artificial intelligence tools, the issue of whether the tool can be subject to true due process requirements–that is, whether a person impacted by a system's decision can have sufficient access to be able to determine how an adverse decision was made–should be a key factor.
- For instance, Nokia Siemens says that it will only provide core lawful intercept (i.e. surveillance) capabilities that are legally required and are "based on clear standards and a transparent foundation in law and practice."
- In some instances, as with AI, this review may include interpreting and applying legal and ethics principles, rather than simply waiting for “generally accepted” ones to emerge, since law enforcement often implements technologies before those rules are clear. EFF and a broad international coalition have already interpreted key international legal doctrines on mass surveillance in the Necessary and Proportionate Principles.
- For domestic uses, this review must include an evaluation of whether sufficient local control is in place. EFF and the ACLU have worked to ensure this with a set of proposals called Community Control Over Police Surveillance (CCOPS). If local control and protections are not yet in place, the company should decline to provide the technology until they are, especially in locations in which the population is already at risk from surveillance.
- Review credible reports about the Government and its human rights record, including news or other reports from nongovernmental sources or local sources that indicate whether the Government engages in the use or misuse of surveillance capabilities to conduct human rights abuses.
- Internationally, this can include U.S. State Department reports as well as other governmental and U.N. reports, as well as those by well-respected NGOs and journalists.
- Domestically, this can include all of the above, plus Department of Justice reports about police departments, like the ones issued about Ferguson, MO, and San Francisco, CA.
- For both, this review can and should include nongovernmental and journalist sources as well.
Refrain from Participation: The Company must not participate in, or continue to participate in, a Transaction or provide a Technology if it appears reasonably foreseeable that the Transaction or Technology will directly or indirectly facilitate governmental abuses. This includes cases in which:
- The portion of the Transaction that the Company is involved in or the specific Technology provided includes building, customizing, configuring, or integrating into a system that is known or is reasonably foreseen to be used for governmental abuses, whether done by the Company or by others.
- The portion of the Government that is engaging in the Transaction or overseeing the Technologies has been recognized as committing governmental abuses using or relying on similar Technologies.
- The Government's overall record on human rights generally raises credible concerns that the Technology or Transaction will be used to facilitate governmental abuses.
- The Government refuses to incorporate contractual terms confirming the intended use or uses of the Technology, confirming local control similar to the CCOPS Proposals, or allowing the auditing of their use by the Government purchasers in sales of surveillance Technologies.
- The investigation reveals that the technology is not capable of operating in a way that protects against abuses, such as when due process cannot be guaranteed in AI/ML decision-making, or when bias in training data or facial recognition outcomes is endemic or cannot be corrected.
Key Definitions and the Scope of the Process: Who should undertake these steps? The field is actually pretty small: Companies engaging in Transactions to sell or lease surveillance Technologies to Governments, defined as follows:
- “Governmental Abuses” includes governmental violations of international human rights law, international humanitarian law, domestic civil rights violations, domestic civil liberties violations and other legal violations that involve governments doing harm to people. As noted above, in some instances involving new or evolving technology or uses of technology, this may include interpreting and applying those principles and laws, rather than simply waiting for legal interpretations to emerge.
- “Transaction” includes all sales, leases, rental or other types of arrangements where a Company, in exchange for any form of payment or other consideration, either provides or assists in providing Technologies, personnel or non-technological support to a Government. This also includes providing of any ongoing support to Governments such as software or hardware upgrades, consulting or similar services.
- “Technologies” include all systems, technologies, consulting services, and software that, through marketing, customization, government contracting processes, or otherwise, are known to the company to be used, or to be reasonably likely to be used, to surveil third parties. This includes technologies that intercept communications, packet-sniffing software, deep packet inspection technologies, facial recognition systems, artificial intelligence and machine learning systems aimed at facilitating surveillance, certain biometrics devices and systems, voting systems, and smart meters.
- Note that EFF does not believe that general purpose technologies should be included in this, unless the Company has a clear reason to believe that they will be used for surveillance.
- Surveillance technologies like facial recognition systems are generally not sold to Governments off the shelf. Technology providers are almost inevitably involved in training, supporting, and developing these tools for specific governmental end users, like a specific law enforcement agency.
- “Company” includes subsidiaries, joint ventures (especially joint ventures directly with government entities), and other corporate structures where the Company has significant holdings or has operational control.
- “Government” includes all segments of government: local law enforcement, state law enforcement, and federal and even military agencies. It includes formal, recognized governments, including member states of the United Nations.
- It also includes governing or government-like entities, such as the Chinese Communist Party or the Taliban and other nongovernmental entities that effectively exercise governing powers over a country or a portion of a country.
- For these purposes “Government” includes indirect sales through a broker, reseller, systems integrator, contractor, or other intermediary or multiple intermediaries if the Company is aware or should know that the final recipient of the Technology is a Government.
If tech companies want to be part of making the world better, they must commit to making business decisions that consider potential governmental abuses.
This framework is similar to the one in the current U.S. export controls and also to the steps required of Companies under the Foreign Corrupt Practices Act. It is based on the recognition that companies involved in domestic government contracting, especially for the kinds of expensive, service-heavy surveillance systems provided by technology companies, are already participating in a highly regulated process with many requirements. For larger federal contractors, these include providing complex cost or pricing data, doing immigration checks and conducting drug testing. Asking these companies to ensure that they are not facilitating governmental abuses is not a heavy additional lift.
Regardless of how tech companies get there, if they want to be part of making the world better, not worse, they must commit to making business decisions that consider potential governmental abuses. No reasonable company wants to be known as the company that knowingly helps facilitate governmental abuses. Technology workers are making it clear that they don’t want to work for those companies either. While the blog posts and public statements from a few of the tech giants are a good start, it’s time all tech companies take real, enforceable steps to ensure that they aren’t serving as "abuse’s little helpers."
On Tuesday, we wrote a report about how the Irvine Company, a private real estate development company, has collected automated license plate reader (ALPR) data from patrons of several of its shopping centers, and is providing the collected data to Vigilant Solutions, a contractor notorious for its contracts with state and federal law enforcement agencies across the country.
The Irvine Company initially declined to respond to EFF’s questions, but after we published our report, the company told the media that it only collects information at three malls in Orange County (Irvine Spectrum Center, Fashion Island, and The Marketplace) and that Vigilant Solutions only provides the data to three local police departments (the Irvine, Newport Beach, and Tustin police departments).
The next day, Vigilant Solutions issued a press release claiming that the Irvine Company ALPR data actually had more restricted access (in particular, denying transfers to the U.S. Immigration & Customs Enforcement [ICE] agency), and demanding EFF retract the report and apologize. As we explain below, the EFF report is a fair read of the published ALPR policies of both the Irvine Company and Vigilant Solutions. Those policies continue to permit broad uses of the ALPR data, far beyond the limits that Vigilant now claims exist.
Vigilant Solutions’ press release states that the Irvine Company’s ALPR data "is shared with select law enforcement agencies to ensure the security of mall patrons,” and that those agencies "do not have the ability in Vigilant Solutions' system to electronically copy this data or share this data with other persons or agencies, such as ICE.”
This is important because the published policies are extremely broad. To begin with, the Irvine Company policy explains that “[t]he automatic license plate readers used by Irvine or its contractors are programmed to transmit the ALPR Information to” "a searchable database of information from multiple sources ('ALPR System') operated by Vigilant Solutions, LLC” "upon collection."
Moreover, the Irvine Company policy still says that Vigilant Solutions "may access and use the ALPR System for any of the following purposes: (i) to provide ALPR Information to law enforcement agencies (e.g., for identifying stolen vehicles, locating suspected criminals or witnesses, etc.); or (ii) to cooperate with law enforcement agencies, government requests, subpoenas, court orders or legal process.”
Under this policy, the use of ALPR data is not limited only to uses that "ensure the security of mall patrons,” nor even to any particular set of law enforcement agencies, select or otherwise. The policy doesn’t even require legal process; instead it allows access where the “government requests.”
Likewise, Vigilant Solutions’ policy states that the “authorized uses of the ALPR system” include the very broad category of "law enforcement agencies for law enforcement purposes,” and—unlike the policy Vigilant claims to have in its press release—does not state any restriction on access by any particular law enforcement agency or to any particular law enforcement purpose. ICE is a law enforcement agency.
We appreciate that Vigilant Solutions is now saying that the information collected from customers of the Irvine Spectrum Center, Fashion Island, and The Marketplace will never be transmitted to ICE and will only be used to ensure the security of mall patrons. But if they want to put that issue to rest, they should, at a minimum, update their published ALPR policies.
Better yet, given the inherent risks with maintaining databases of sensitive information, Irvine and Vigilant Solutions should stop collecting information about mall patrons and destroy all the collected information. As a mass-surveillance technology, ALPR can be used to gather information on sensitive populations, such as immigrant drivers, and may be misused. Further, once collected, ALPR may be accessible by other government entities—including ICE—through various legal processes.
In addition, Vigilant Solutions’ press release takes issue with EFF’s statement that "Vigilant Solutions shares data with as many as 1,000 law enforcement agencies nationwide.” According to Vigilant Solutions’ press release, "Vigilant Solutions does not share any law enforcement data. The assertion is simply untrue. Law enforcement agencies own their own ALPR data and if they choose to share it with other jurisdictions, the[y] can elect to do so.”
This is a distinction without a difference.
As Vigilant Solutions’ policy section on "Sale, Sharing or Transfer of LPR Data” (emphasis added) states, “the company licenses our commercially collected LPR data to customers,” "shares the results of specific queries for use by its customers” and "allows law enforcement agencies to query the system directly for law enforcement purposes.” The only restriction is that, for information collected by law enforcement agencies, "we facilitate sharing that data only with other LEAs … if sharing is consistent with the policy of the agency which collected the data." If Vigilant Solutions only meant to dispute “sharing” with respect to information collected by law enforcement, this is a non-sequitur, as the Irvine Company is not a law enforcement agency.
Nevertheless, Vigilant Solutions’ dispute over whether it truly “shares” information puts an Irvine Company letter published yesterday in an interesting light. The Irvine Company reportedly wrote to Vigilant Solutions to confirm that “Vigilant has not shared any LPR Data generated by Irvine with any person or agency other than the Irvine, Newport Beach and Tustin police departments and, more specifically you have not shared any such data with U.S. Immigration and Customs Enforcement (ICE).”
Under the cramped “sharing” definition in the Vigilant Solutions press release, any such “confirmation” would not prevent Vigilant from licensing the Irvine data, sharing results of specific queries, allowing law enforcement to query the system directly, or “facilitate sharing” with ICE if the police department policies allowed it. If Irvine and Vigilant didn’t mean to allow this ambiguity, they should be more clear and transparent about the actual policies and restrictions.
The rest of the press release doesn’t really need much of a response, but we must take issue with one further claim. Vigilant Solutions complains that, while EFF reached out several times to the Irvine Company (with no substantive response), EFF did not reach out to them directly about the story. This assertion is both misleading and ironic.
A year ago, EFF sent a letter to Vigilant Solutions with 31 questions about its policies and practices. To date, Vigilant Solutions has not responded to a single question. In addition, Vigilant Solutions had already told the press, “as policy, Vigilant Solutions is not at liberty to discuss or share any contractual details. This is a standard agreement between our company, our partners, and our clients.”
Indeed, Vigilant Solutions has quite a history of fighting EFF’s effort to shine a light on ALPR practices, issuing an open letter to police agencies taking EFF to task for using Freedom of Information Act and Public Records Act requests to uncover information on how public agencies collect and share data. A common Vigilant Solutions contract has provisions where the law enforcement agency “also agrees not to voluntarily provide ANY information, including interviews, related to Vigilant, its products or its services to any member of the media without the express written consent of Vigilant.”
Vigilant Solutions has built its business on gathering sensitive information on the private activities of civilians, packaging it, and making it easily available to law enforcement. It’s deeply ironic that Vigilant gets so upset when someone wants to take a closer look at its own practices.
The hope that filled Egypt's Internet after the 2011 January 25 uprising has long since faded away. In recent years, the country's military government has instead created a digital dystopia, pushing once-thriving political and journalism communities into closed spaces or offline, blocking dozens of websites, and arresting a large number of activists who once relied upon digital media for their work.
In the past two years, we’ve witnessed the targeting of digital rights defenders, journalists, crusaders against sexual harassment, and even poets, often on trumped-up grounds of association with a terrorist organization or “spreading false news.” Now, the government has put forward a new law that will result in its ability to target and persecute just about anyone who uses digital technology.
The new 45-article cybercrime law, named the Anti-Cyber and Information Technology Crimes law, is divided into two parts. The first part of the bill stipulates that service providers are obligated to retain user information (i.e. tracking data) in the event of a crime, whereas the second part of the bill covers a variety of cybercrimes under overly broad language (such as “threat to national security”).
Article 7 of the law, in particular, grants the state the authority to shut down Egyptian or foreign-based websites that “incite against the Egyptian state” or “threaten national security” through the use of any digital content, media, or advertising. Article 2 of the law authorizes broad surveillance capabilities, requiring telecommunications companies to retain and store users’ data for 180 days. And Article 4 explicitly enables foreign governments to obtain access to information on Egyptian citizens and does not make mention of requirements that the requesting country have substantive data protection laws.
The implications of these articles are described in detail in a piece written by the Association for Freedom of Thought and Expression (AFTE) and Access Now. In the piece, the organizations state “These laws serve to close space for civil society and deprive citizens of their rights, especially the right to freedom of expression and of association” and call for the immediate withdrawal of the law.
We agree—the law must be withdrawn. It would appear that the bill’s underlying goal is to set up legal frameworks to block undesirable websites, intimidate social media users, and solidify state control over websites. By expanding government’s power to block websites, target individuals for their speech, and surveil citizens, the Egyptian parliament is helping the already-authoritarian executive branch inch ever closer toward a goal of repressing anyone who dares speak their mind. The overly broad language contained throughout the law will lead to the persecution of individuals who engage in online speech and create an atmosphere of self-censorship, as others shy away from using language that may be perceived as threatening to the government.
The Egyptian law comes at a time of increased repression throughout the Middle East. In the wake of the 2011 uprisings, a number of countries in the region began to crack down on online speech, implementing cybercrime-related laws that utilize broad language to ensure that anyone who steps out of line can be punished.
In a 2015 piece for the Committee to Protect Journalists, Courtney Radsch wrote: “Cybercrime legislation, publicly justified as a means of preventing terrorism and protecting children, is a growing concern for journalists because the laws are also used to restrict legitimate speech, especially when it is critical or embarrassing to authorities.”
A June 2018 report from the Gulf Center for Human Rights maps both legal frameworks and violations of freedom of expression in the six Gulf states, as well as Jordan, Syria, and Lebanon, noting that “The general trend for prosecution was that digital rights and freedoms were penalised and ruled as 'cybercrime' cases delegated to general courts. Verdicts in these cases have been either based on an existing penal code where cybercrime laws are absent, in the process of being drafted, or under the penal code and a cybercrime law.”
These are difficult times for free expression in the region. EFF continues to monitor the development of cybercrime and other relevant laws and offers our support to the many organizations in the region fighting back against these draconian laws.
When government agencies refuse to let the members of the public watch what they’re doing, drones can be a crucial journalistic tool. But now, some members of Congress want to give the federal government the power to destroy private drones it deems to be an undefined “threat.” Even worse, they’re trying to slip this new, expanded power into unrelated, must-pass legislation without a full public hearing. Worst of all, the power to shoot these drones down will be given to agencies notorious for their absence of transparency, denying access to journalists, and lack of oversight.
Back in June, the Senate Homeland Security and Governmental Affairs Committee held a hearing on the Preventing Emerging Threats Act of 2018 (S. 2836), which would give the Department of Homeland Security and the Department of Justice sweeping new authority to counter privately owned drones. Congress shouldn’t grant DHS and DOJ such broad, vague authorities that allow them to sidestep current surveillance law.
The National Defense Authorization Act (NDAA) is a complex annual bill to reauthorize military programs and is wholly unrelated to both DHS and DOJ. Hiding language in unrelated bills is rarely a good way to make public policy, especially when the whole Congress hasn’t had a chance to vet the policy.
But most importantly, expanding the agencies’ authorities without requiring that they follow the Wiretap Act, Electronic Communications Privacy Act, and the Computer Fraud and Abuse Act raises large First and Fourth Amendment concerns that must be addressed.
Drones are a powerful tool for journalism and transparency. Today, the Department of Homeland Security routinely denies reporters access to detention centers along the southwest border. On the rare occasions DHS does allow entry, the visitors are not permitted to take photos or record video. Without other ways to report on these activities, drones have provided crucial documentation of the facilities being constructed to hold children. Congress should think twice before granting the DHS the authority to shoot drones down, especially without appropriate oversight and limitations.
If S. 2836 is rolled into the NDAA, it would give DHS the ability to “track,” “disrupt,” “control,” “seize or otherwise confiscate” any drone that the government deems to be a “threat,” without a warrant or due process. DHS and DOJ might interpret this vague and overbroad language to include the power to stop journalists from using drones to document government malfeasance at these controversial children’s detention facilities.
As we said before, the government may have legitimate reasons for engaging drones that pose an actual, imminent, and narrowly defined “threat.” Currently, the Department of Defense already has the authority to take down drones, but only in much more narrowly circumscribed areas directly related to enumerated defense missions. DHS and DOJ have not made it clear why existing exigent circumstance authorities aren’t enough. But even if Congress agrees that DHS and DOJ need expanded authority, that authority must be carefully balanced so as not to curb people’s right to use drones for journalism, free expression, and other non-criminal purposes.
EFF has been concerned about government misuse of drones for a long time. But drones also represent an important tool for journalism and activism in the face of a less-than-transparent government. We can’t hand the unchecked power to destroy drones to agencies not known for self-restraint, and we certainly can’t let Congress give them that power through an opaque, backroom process.
It’s easy to feel adrift these days. The rising tide of social unrest and political extremism can be overwhelming, but on EFF’s 28th birthday our purpose has never been more clear. With the strength of our numbers, we can fight against the scourge of pervasive surveillance, government and corporate overreach, and laws that stifle creativity and speech. That's why today we're launching the Shipshape Security membership drive with a goal of 1,500 new and renewing members. For two weeks only, you can join EFF for as little as $20 and get special member swag that will remind you to keep your digital cabin shipshape.
Online Freedom Begins with You!
Digital security anchors your ability to express yourself, challenge ideas, and have candid conversations. It’s why EFF members fight for uncompromised online tools and ecosystems: the world can no longer resist tyranny without them. We also know that our impact is amplified when we approach security together and support one another. The future of online privacy and free expression depends on our actions today.
If you know people who care about online freedom, the Shipshape Security drive is a great time to encourage them to join EFF. On the occasion of our birthday, EFF has also released a new member t-shirt for this year featuring our fresh-from-the-oven logo. Members support EFF's work educating policymakers and the public with crucial analysis of the law, developing educational resources like Surveillance Self-Defense, developing software tools like Privacy Badger, empowering you with a robust action center, and doing incisive work in the courts to protect the public interest.
Before the rise of the Internet, a crew of pioneers established EFF to help the world navigate the great promise and dangerous possibilities of digital communications. Today, precisely 28 years later, EFF is the flagship nonprofit leading a tenacious movement to protect online rights. Support from the public makes it possible, and EFF refuses to back down.
Come hell or high water, EFF is fighting for your rights online. Lend your support and join us today.
DNA Collection is Not the Answer to Reuniting Families Split Apart by Trump’s “Zero Tolerance” Program
The Trump Administration’s “zero tolerance” program of criminally prosecuting all undocumented adult immigrants who cross the U.S.-Mexico border has had the disastrous result of separating as many as 3,000 children—many no older than toddlers—from their parents and family members. The federal government doesn’t appear to have kept track of where each family member has ended up. Now politicians, agency officials, and private companies argue DNA collection is the way to bring these families back together. DNA is not the answer.
Two main DNA-related proposals appear to be on the table. First, in response to requests from U.S. Representative Jackie Speier, two private commercial DNA-collection companies proposed donating DNA sampling kits to verify familial relationships between children and their parents. Second, the federal Department of Health and Human Services has said it is either planning to or has already started collecting DNA from immigrants, also to verify kinship.
Both of these proposals threaten not just the privacy, security, and liberty of undocumented immigrants swept up in Trump’s Zero Tolerance program but also the privacy, security, and liberty of everyone related to them.
Jennifer Falcon, communications director at RAICES, an organization that provides free and low-cost legal services to immigrant children, families, and refugees in Texas, succinctly summarized the problem:
These are already vulnerable communities, and this would potentially put their information at risk with the very people detaining them. They’re looking to solve one violation of civil rights with something that could cause another violation of civil rights.

Why is this a problem?
DNA reveals an extraordinary amount of private information about us. Our DNA contains our entire genetic makeup. It can reveal where our ancestors came from, who we are related to, our physical characteristics, and whether we are likely to get various genetically-determined diseases. Researchers have also theorized DNA may predict race, intelligence, criminality, sexual orientation, and even political ideology.
DNA collected from one person can be used to track down and implicate family members, even if those family members have never willingly donated their own DNA to a database. In 2012, researchers used genetic genealogy databases and publicly-available information to identify nearly 50 people from just three original anonymized samples. The police have used familial DNA searching to tie family members to unsolved crimes.
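A rough sketch of why one person’s profile implicates their relatives: a forensic DNA profile is typically a pair of short-tandem-repeat (STR) allele values at each of a fixed set of loci, and because a child inherits one allele per locus from each parent, close relatives share far more alleles than strangers. The loci names are real STR markers, but the allele numbers below are made up for illustration; this is not any lab’s actual matching method.

```python
# Illustrative only: toy STR profiles as {locus: (allele, allele)} maps.
# Familial searching looks for strong partial matches like parent/child,
# not just exact matches.

def shared_alleles(profile_a, profile_b):
    """Count alleles two profiles have in common, locus by locus."""
    shared = 0
    for locus in profile_a:
        a, b = set(profile_a[locus]), set(profile_b[locus])
        shared += len(a & b)
    return shared

parent   = {"D3S1358": (15, 16), "vWA": (17, 18), "FGA": (21, 24)}
child    = {"D3S1358": (15, 14), "vWA": (18, 19), "FGA": (24, 22)}
stranger = {"D3S1358": (12, 13), "vWA": (14, 15), "FGA": (19, 20)}

print(shared_alleles(parent, child))     # 3: one inherited allele per locus
print(shared_alleles(parent, stranger))  # 0: no overlap at any locus
```

The parent and child overlap at every locus while the stranger matches nowhere, which is the signal familial searching exploits even when the relative’s own DNA was never submitted.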
Once the federal government collects a DNA sample—no matter which agency does the collecting—the sample is sent to the FBI for storage, and the extracted profile is incorporated into the FBI’s massive CODIS database, which already contains over 13 million “offender” profiles (“detainees” are classified as “offenders”). It is next to impossible to get DNA expunged from the database, and once it’s in CODIS it is subject to repeated warrantless searches by all levels of state and federal law enforcement. Those searches have, in the past, implicated people for crimes they didn’t commit.

Unanswered Questions
Both of the proposals to identify separated family members with DNA raise many unanswered questions. Here are a few we should be asking:
Who is actually collecting the DNA samples from parents and children?
Is it the federal government? If so, which agency? If it’s a private entity, which entity?
What legal authority do they have to collect DNA samples?
DHS still doesn’t appear to have legal authority to collect DNA samples from anyone younger than 14. Children younger than 14 should not be deemed to have consented to DNA collection. And under these circumstances, parents cannot consent to the collection of DNA from their children because the federal government has admitted it has already lost track of which children are related to which adults.
How are they collecting and processing the DNA?
Are they collecting a sample via a swab of the cheek? Is collection coerced or is it with the consent and assistance of the undocumented person? Once the sample is collected, how is it processed? Is it processed in a certified lab? Is it processed using a Rapid DNA machine? How is chain of custody tracked, and how is the collecting entity ensuring samples aren’t getting mixed up?
What happens to the DNA samples after they are collected, and who has access to them?
Are samples deleted after a match is found? If not, and if they are collected by a private genetics or genetic genealogy company like 23andMe or MyHeritage, do these companies get to hold onto the samples and add them to their databanks? Are there any limits on who can access them and for what purpose? If the federal government collects the samples, where is it storing them and who has access to them?
Will the DNA profiles extracted from the samples end up in FBI’s vast CODIS criminal DNA database?
Currently, DHS does not have its own DNA database. Any DNA it collects goes to the FBI, where it may be searched by law enforcement agencies across the country.
Will the collected DNA be shared with foreign governments?
The U.S. government shares biometric data with its foreign partners. Will it share immigrant DNA? Will this be used to target immigrants if or when they are sent back home?
What if the separated family members aren’t genetically related or don’t represent a parent-child relationship?
How is the U.S. government planning to determine who is a “family member” once agencies have lost track of the families who traveled here together? What if the parent is a step-parent or legal guardian? What if the child is adopted? What if the adult traveling with the child is a more distant relative? Will they still be allowed to be reunited with their children?
These proposals to use DNA to reunite immigrant families aren’t new. In 2008, the United Nations High Commissioner for Refugees (UNHCR) looked at this exact problem. In a document titled DNA Testing to Establish Family Relationships in the Refugee Context, it recognized that DNA testing “can have serious implications for the right to privacy and family unity” and should be used only as a “last resort.” In 2012, we raised alarms about DHS’s proposals at that time to use DNA to verify claimed kinship in the refugee and asylum context. The concerns raised by DNA collection ten years ago have only intensified since then.
The Trump administration shouldn’t be allowed to capitalize on the family separation crisis it created to blind us to these concerns. And well-meaning people who want to reunite families should consider other solutions to this crisis. Immigrant families shouldn’t have to trade the civil rights violation of losing their family members for the very real threats to privacy and civil liberties posed by DNA collection.
Free WiFi all across New York City? It sounded like a dream to many New Yorkers, until the public learned that it wasn’t “free” at all. LinkNYC, a communications network that is replacing public pay phones with WiFi kiosks across New York City, is paid for by advertising that tracks users, festooned with cameras and microphones, and has questionable processes for allowing the public to influence its data handling policies.
These kiosks also gave birth to ReThink LinkNYC, a grassroots community group that’s uniting New Yorkers from different backgrounds in standing up for their privacy. In a recent interview with EFF, organizers Adsilla Amani and Mari Dej described the organization as a “hodgepodge of New Yorkers” who were shocked by the surveillance-fueled WiFi kiosks entering their neighborhoods. More importantly, they saw opportunity. As Dej described, “As we began scratching the surface, [we] saw that this was an opportunity as well to highlight some of the problems that are largely invisible with data brokers and surveillance capitalism.”
ReThink LinkNYC, which has launched an informational website and hosts events across New York, has been pushing city officials for transparency and accountability. They have demanded a halt to construction on the kiosks until adequate privacy safeguards are enacted.
The group has already had some successes. As Dej described it, “We certainly got the attention of LinkNYC, and that itself is a victory – [they] know that there is an organized group of everyday peeps unhappy with the lack of transparency around the LinkNYC 'spy kiosks.’”
ReThink LinkNYC has thrived in part because it actively cultivated partnerships, and not just with the tech community. Dej noted, “Inasmuch as the structure of surveillance affects us all, all of us deserve to be aware, and welcomed into action. A movement needs to extend beyond the tech community.”
To other groups around the country that might be interested in campaigning to defend civil liberties in their own communities, Amani advised organizers to examine the power structures they are opposing and cultivate personal connections: “Civic involvement remains a more or less fringe activity for a majority of people. So appeal to what human community is—feelings of connection, acceptance, creating a safe world for our children, and a chance to be creative, 'seen', and given a sense that one’s participation is valued. If we'd like our tech future to be cooperative (versus dominated by wealth or authoritarian styles), then that's how we organize. If we dedicate ourselves to unlearning the hierarchical behavioral model, we can more easily sense our power.”
Dej agreed, adding “We have the power, we just have yet to realize it.”
ReThink LinkNYC joined the Electronic Frontier Alliance (EFA) over a year ago, and has used the network to help connect with other digital rights activists in New York City, get assistance with event promotion, and discuss strategies. Dej shared that EFA has been useful for connecting with other activists, saying, “It helps us connect to other people and other parts of this issue that you wouldn’t think of right off the bat, like Cryptoparty, who gave us insight into the technology part of all this… It’s also good to see people working and that we’re not the only ones going through this struggle. There are other people fighting different parts of this system as hard as they can.”
The Electronic Frontier Alliance was launched in March 2016 to help inspire and connect community and campus groups across the United States to defend digital rights. While each group is independent and has its own focus areas, every member group upholds five principles:
- Free expression: people should be able to speak their minds to whomever will listen.
- Security: technology should be trustworthy and answer to its users.
- Privacy: technology should allow private and anonymous speech, and allow users to set their own parameters about what to share with whom.
- Creativity: technology should promote progress by allowing people to build on the ideas, creations, and inventions of others.
- Access to knowledge: curiosity should be rewarded, not stifled.
Interviews with ReThink LinkNYC were conducted by phone with follow up over email, and responses edited lightly for clarity.
A company that operates 46 shopping centers up and down California has been providing sensitive information collected by automated license plate readers (ALPRs) to Vigilant Solutions, a surveillance technology vendor that in turn sells location data to Immigration and Customs Enforcement.
Automated license plate recognition is a form of mass surveillance in which cameras capture images of license plates, convert the plate into plaintext characters, and append a time, date, and GPS location. This data is usually fed into a database, allowing the operator to search for a particular vehicle’s travel patterns or identify visitors to a particular location. By adding certain vehicles to a “hot list,” an ALPR operator can receive near-real time alerts on a person’s whereabouts.
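The collect-store-alert pipeline described above can be sketched in a few lines. Everything here is hypothetical for illustration: the record layout, function names, and plate numbers are invented, not any vendor’s actual schema or API.

```python
from datetime import datetime, timezone

hot_list = {"7ABC123"}   # plates an operator has flagged for alerts
detections = []          # the searchable location database

def record_detection(plate, lat, lon):
    """Store one camera read (plate + time + GPS); True means hot-list hit."""
    detections.append({
        "plate": plate,
        "time": datetime.now(timezone.utc).isoformat(),
        "lat": lat,
        "lon": lon,
    })
    return plate in hot_list

def travel_pattern(plate):
    """Query the database for every recorded sighting of one vehicle."""
    return [d for d in detections if d["plate"] == plate]

record_detection("7ABC123", 33.68, -117.83)  # returns True: hot-list alert
record_detection("4XYZ789", 33.69, -117.84)  # returns False: stored silently
```

The point of the sketch is that every read is retained whether or not it matches a hot list, which is what makes later queries like `travel_pattern` possible for any vehicle the operator chooses.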
EFF contacted the Irvine Company with a series of questions about the surveillance program, including which malls deploy ALPRs and how much data has been collected and shared about its customers and employees. Although the Irvine Company accepted the questions by phone, it did not respond further or answer any of them.
The Irvine Company's Shopping Centers in California: [embedded map]
The Irvine Company’s policy describes a troubling relationship between the retail world and the surveillance state. The cooperation between the two companies allows the government to examine the travel patterns of consumers on private property with little transparency and no consent from those being tracked. As private businesses, Vigilant Solutions and the Irvine Company are generally shielded from transparency measures such as the California Public Records Act. The information only came to light due to a 2015 law passed in California that requires ALPR operators—both public and private alike—to post their ALPR policies online. Malls in other states where no such law exists could well be engaged in similar violations of customer privacy without any public accountability.
In December 2017, ICE signed a contract with Vigilant Solutions to access its license-plate reader database. Data from Irvine Company’s malls directly feeds into Vigilant Solutions’ database system. This means that ICE can spy on mall visitors without their knowledge and receive near-real-time alerts when a targeted vehicle is spotted in a shopping center’s parking lot.
Vigilant Solutions’ dealings with ICE have come under growing scrutiny in California as the Trump administration accelerates its immigration enforcement. The City of Alameda rejected a contract with Vigilant Solutions following community outcry over its contracts with ICE. The City of San Pablo put an expansion of its surveillance network on hold due to the same concerns.
But ICE isn’t the only agency accessing the data. Vigilant Solutions shares data with as many as 1,000 law enforcement agencies nationwide. Through its sister company, Digital Recognition Network, Vigilant Solutions also sells ALPR data to financial lenders, insurance companies, and debt collectors.
“Irvine is committed to limiting the access and use of ALPR Information in a manner that is consistent with respect for individuals' privacy and civil liberties,” the Irvine Company writes in its policy. “Accordingly, contractors used to collect ALPR Information on Irvine's behalf and Irvine employees are not authorized to access or use the ALPR Information or ALPR System.” And the Irvine Company says it deletes the data once it has been transmitted to Vigilant Solutions.
Although the Irvine Company pays lip service to civil liberties, the company undermines that position by allowing Vigilant Solutions to apply its own policy to the data. Vigilant Solutions does not purge data on a regular basis and instead “retains LPR data as long as it has commercial value.”
The Irvine Company must shut down its ALPR system immediately. By conducting this location surveillance and working with Vigilant Solutions, the company is putting not only immigrants at risk, but invading the privacy of its customers by allowing a third-party to hold onto their data indefinitely.
We will update this post if and when the Irvine Company decides to respond to our questions.
Special thanks to Zoe Wheatcroft, the EFF volunteer who first spotted The Irvine Company's ALPR policy.
EFF was founded on this day, exactly 28 years ago. Since that time, EFF’s logo has remained more or less unchanged. This helped us develop a consistent identity — people in the digital rights world instantly recognize our big red circle and the heavy black “E.” But the logo has some downsides. It’s hard to read, doesn’t say much about our organization, and looks a bit out of date.
Today, we are finally getting around to a new look for our organization thanks to the generosity of top branding organization Pentagram! We’ve launched a new logo nicknamed “Insider,” and it was created for us by Pentagram under the leadership of the amazing Michael Bierut.
To celebrate, we’re releasing our new EFF member shirt, featuring the new logo. It’s a simple black shirt with the logo in bright red and white. Join us or renew your membership and get a new shirt today!
There’s a good story behind how this new logo came about.
Last year, EFF defended Kate Wagner, the blogger behind McMansion Hell, a popular blog devoted to the many flaws and failures of so-called “McMansions,” those oversized suburban tract homes that many people love to hate. The online real estate database Zillow objected to Wagner's use of their photos, and threatened her with legal action.
EFF stepped in to defend Wagner. EFF Senior Staff Attorney Daniel Nazer sent a letter on the blogger’s behalf, explaining that her use of Zillow’s images was a textbook example of fair use. Zillow backed down, and her many supporters let out a collective cheer.
One of those supporters was Michael Bierut, who also happens to be one of the best logo designers on the planet. (You have probably seen some of his work: among his many recognizable works are logos for MIT’s Media Lab, MasterCard, and Hillary Clinton.) Bierut said he loved EFF's letter, recognized it as great legal writing, and also saw that EFF needed a new logo. He and his team at Pentagram offered to make us a new one, pro bono.
We were really touched and pleased by his offer. Over subsequent months, we worked with Bierut and his team to come up with something new. In describing what we were looking for, we told Pentagram that we wanted something simple, classic, and that matched the boldness of our vision for the Internet.
After several rounds and revisions, they came up with this new logo, Insider. One of the great things about this logo is that, in true Pentagram fashion, this logo is really a logo system. As seen in the video above, the logo can be reconfigured and adjusted in multiple ways, allowing us to adjust our look for many purposes. This logo will look as good on a formal legal letter as it does in an activist campaign. It also uses a great open source typeface called League Gothic!
We hope you like the new logo as much as we do—and that when you see it, wear it, or display it, it continues to convey our history of working for your online rights, and our plan to keep up the fight long into the future.
Join EFF AND GET OUR NEW LOGO T-SHIRT
A happy ending, shared with Kate Wagner and Michael Bierut’s consent.
After a hearing that stripped California’s gold standard net neutrality bill of much of its protections, California legislators have negotiated new amendments that restore the vast majority of those protections to the bill. The big ISPs and their money did not defeat the voices of the many, many people who want and need a free and open Internet.
On June 20, the Communications and Conveyance Committee of the California Assembly, after having rejected proposed amendments to move Senator Scott Wiener’s S.B. 822 and Senator Kevin de León’s S.B. 460 forward as a package, also voted to gut S.B. 822's strong net neutrality protections. It was a move that resulted in a hollowed-out version of S.B. 822 that left huge loopholes for ISPs.
Since then, there’s been an outcry from Team Internet in California, making clear how important effective, strong net neutrality protections are. Senator Wiener, Senator de León, Assemblymember Rob Bonta, and Assemblymember Miguel Santiago, the Chair of the Assembly Committee on Communications and Conveyance that voted on the watered-down bill, have all come to an agreement that once again makes California’s proposed legislation the strongest net neutrality bill in the country.
The willingness of Assemblymember Santiago to listen to his constituents’ opinions and realize their needs, as opposed to those of large ISPs like AT&T, is laudable. And the resulting agreement puts California net neutrality back on track.
As was initially proposed by Senator Wiener and Senator de León, both net neutrality bills will now become a package. The ban on blocking, throttling, and paid prioritization remains—paid prioritization has been a particular target of misleading ISP arguments. The ban on certain kinds of zero rating—the kinds that lead consumers to services that ISPs want them to use rather than giving them choices—also remains. And so does the ban on access fees, which means ISPs will not be able to get around these protections by charging fees at the places where data enters their networks.
This is what real net neutrality looks like. And it all happened because people spoke out. You sent emails, called offices, crowdfunded a billboard—all of that was heard. People’s voices trumped company money this time.
The fight’s not over: these bills still need to be passed by the California legislature and signed by the governor. So keep telling them to vote for S.B. 822.
Tell California Assemblymembers to Vote Yes on S.B. 822
Big companies are harvesting and monetizing your face print, fingerprints, and other sensitive biometric information, without your knowledge and consent. That’s why Illinois wisely enacted the Biometric Information Privacy Act (BIPA), which prohibits companies from gathering, using, or sharing your biometric information without your informed opt-in consent. Now companies are asking the Illinois Supreme Court to defang BIPA, by narrowly interpreting its enforcement tool and thus depriving injured parties of their day in court.
EFF has joined an amicus curiae brief urging the Illinois Supreme Court to adopt a robust interpretation of BIPA. Our fellow amici are ACLU, CDT, the Chicago Alliance Against Sexual Exploitation, PIRG, and Lucy Parsons Labs. In the case on appeal, Rosenbach v. Six Flags, an adolescent who purchased a season pass to an amusement park alleges the park scanned and stored his thumbprint biometrics without written consent or notice about its plan to collect, store, and use his biometric information.
The Illinois Supreme Court will decide the effectiveness of BIPA’s enforcement tool. BIPA provides that “any person aggrieved by a violation of this Act” may file their own lawsuit against the company that violated the Act. The question before the court is whether a person is “aggrieved,” and may sue, based solely on the collection of their biometric information without their informed opt-in consent, or whether a person must also show some additional injury.
EFF and our fellow amici argue that a person is “aggrieved,” and may sue, based just on capture of their biometric information without notice and informed consent. We offer several reasons. First, biometric surveillance is a growing menace to our privacy. Our biometric information can be harvested at a distance and without our knowledge, and we often have no ability as individuals to effectively shield ourselves from this grave privacy intrusion. Second, BIPA follows in the footsteps of a host of other privacy laws that prohibit the capture of private information absent informed opt-in consent, and that define capture without notice and consent by itself as an injury. Third, allowing private lawsuits is a necessary means to ensure effective enforcement of privacy laws.
Perhaps most importantly, more businesses than ever are capturing and monetizing our biometric information. Retailers use face recognition to surveil shoppers’ behavior as they move about the store, and to identify potential shoplifters. Employers use fingerprints, iris scans, and face recognition to manage employee access to company phones and computers. People have filed BIPA lawsuits against major technology companies like Facebook, Google, and Snapchat, alleging the companies applied face recognition to their uploaded photographs without their consent. The U.S. Chamber of Commerce recently filed an amicus brief in one of these lawsuits, urging a federal appellate court to gut BIPA.
Illinois’ BIPA is the strongest biometric privacy law in the United States. EFF and other privacy groups for years have resisted big business efforts to gut BIPA through the legislative process. Now we are proud to join our privacy allies in an amicus brief before the Illinois Supreme Court to push back against the latest effort to weaken BIPA.
It’s World Cup time. That means goals. And goals mean goal celebrations. Here’s a compilation of U.S. soccer fans celebrating a last-second goal in the 2010 World Cup. Ah, memories. Anyway, FIFA apparently doesn’t like it when fans celebrate near their television sets. It sent a takedown notice aimed at a five-second video of a young boy celebrating in his living room.
Following a goal in the England-Tunisia match, Kathryn Conn posted a five-second video of her seven-year-old son celebrating. Conn explained that her son “is a massive Spurs fan and he absolutely worships Harry Kane so he started dancing around in the living room.” Unfortunately, the dancing occurred in front of a television still playing the game. And if there’s one thing FIFA is serious about, it’s their copyright.
Conn says she woke up the next morning to find the video deleted from Twitter, along with a notice that it had been removed in response to a DMCA takedown notice from FIFA, which apparently was worried that a blurry background shot of a soccer game in a five-second video would make people less likely to watch 2018’s most-viewed TV event in England.
Hmmm. A dancing child in a short video with copyrighted material playing incidentally in the background? We hope that it won’t take 10 years of litigation for FIFA to learn its lesson here. It should respect fair use and respect its fans.
Foreign languages have been taught, and studied, for thousands of years. People who teach languages are the last folks that should be dealing with patent threat letters—but incredibly, that’s exactly what has happened to Mihalis Eleftheriou. Hodder and Stoughton, a large British publisher, has sent a letter to Eleftheriou claiming that it has rights to a patent that covers recorded language lessons, and demanding that he stop providing online courses.
Eleftheriou teaches a variety of online classes through his Language Transfer project. The courses are simple audio files uploaded to platforms like Soundcloud and YouTube. So you can imagine his surprise when he received a letter [PDF] from Hodder and Stoughton, saying that his project infringes a U.S. patent.
Hodder and Stoughton contends that Language Transfer infringes U.S. Patent No. 6,565,358, titled “Language teaching system.” The patent essentially covers a language lesson on tape. In the patent’s words, it claims a particular sequence of “expression segments” to be played on a “recorded medium” using a “playing device.” In plain English, the “expression segments” amount to the following sequence: the teacher asks how to translate a phrase, there is a short pause, an example student attempts to answer the question, and then the teacher provides the correct answer.
At this point you might be asking yourself, wait what? How on Earth did someone get a patent on putting a language lesson on tape? Those are good questions. The answer, frankly, is that the Patent Office needs to do a much better job.
Today EFF has sent a response [PDF] to Hodder and Stoughton on Eleftheriou’s behalf. We explain that the ’358 patent is plainly invalid under the Supreme Court’s 2014 decision in Alice v. CLS Bank. That decision holds that an abstract idea does not become eligible for patent protection merely by being implemented on conventional or generic technology. The ’358 patent—which claims a sequence of human expressions on an ordinary tape—is a quintessential example of the kind of patent that fails this test. It is no more patentable than a sequence of musical notes on tape.
Our letter also explains that the ’358 patent is invalid as anticipated and obvious. Any student that has ever sat in a language class has probably heard the sequence of “expression segments” claimed in the patent. A search quickly revealed prior art. Indeed, the named inventor, Michel Thomas, was featured in a BBC documentary in 1997, more than three years before the patent application was filed. This documentary includes a number of sequences that match the patent’s claims. Hodder and Stoughton’s own lawyer claimed that the patent would cover a recording done via “television system,” essentially admitting that the documentary is invalidating prior art.
Hodder and Stoughton not only demanded that Eleftheriou stop making Language Transfer courses available in the United States, it also demanded that he abandon plans to publish a book about language instruction. This is an abuse of the patent system. First, Hodder and Stoughton has never even seen Eleftheriou’s book. The book will be Eleftheriou’s original work about languages and his language teaching method, and not about editing audio lessons. Second, the patent is invalid. But even more fundamentally, a patent does not allow this kind of censorship.
Hodder and Stoughton appears to be using a patent to make an end-run around the idea/expression dichotomy in copyright law. Copyright allows authors to protect particular expression (their prose), but not ideas (like building suspense via cliffhangers). In language teaching, this might play out so that someone can copyright their specific written lessons or recorded tapes, but not the idea of teaching a second language through immersion. Abstract ideas also cannot be patented. They are fundamental building blocks of knowledge, and not subject to exclusive ownership. Rather, they must remain available to future creators and inventors.
Last week, we visited Congress and presented the ’358 patent to staffers there as an example of how important it is to maintain common-sense limits on patentable subject matter. The patent lobby—in the form of the Intellectual Property Owners Association and the American Intellectual Property Law Association—wants Congress to undo Alice through legislation. These groups are pushing to change the law so that everything is eligible for patent protection unless it is “solely in the human mind.” The ’358 patent shows what a disaster such legislation would be. It could make the patent system a kind of “super copyright” where people can monopolize ideas just by putting them on tape.
Eleftheriou will continue to offer free language courses to people within the United States. We hope Hodder and Stoughton comes to its senses and abandons its absurd demands.
For many years, EFF has urged technology companies and legislators to do a better job at protecting the privacy of technology users and other members of the public. We hoped the companies, particularly mature players, would realize the importance of implementing meaningful privacy protections. But this year’s Cambridge Analytica scandal, following on the heels of many others, was the last straw. Corporations are willfully failing to respect the privacy of technology users, and we need new approaches to give them real incentives to do better—and that may include updating our privacy laws.
To be clear, any new regulations must be judicious and narrowly tailored, avoiding tech mandates and expensive burdens that would undermine competition—already a problem in some tech spaces. To accomplish that, policymakers must start by consulting with technologists as well as lawyers. After the passage of SESTA/FOSTA, we know Congress can be insensitive about the potential consequences of the rules it embraces. Looking to experts would help.
Just as importantly, new rules must also take care not to sacrifice First Amendment protections in the name of privacy protections; for example, EFF opposes the “right to be forgotten,” that is, laws that force search engines to de-list publicly available information. Finally, one size does not fit all: as we discuss in more detail below, new regulations should acknowledge and respect the wide variety of services and entities they may affect. Rules that make sense for an ISP may not make sense for an open-source project, and vice versa.
With that in mind, policymakers should focus on the following: (1) addressing when and how online services must acquire affirmative user consent before collecting or sharing personal data, particularly where that data is not necessary for the basic operation of the service; (2) creating an affirmative “right to know,” so users can learn what data online services have collected from and about them, and what they are doing with it; (3) creating an affirmative right to “data extraction,” so users can get a complete copy of their data from a service provider; and (4) creating new mechanisms for users to hold companies accountable for data breaches and other privacy failures.
But details matter. We offer some below, to help guide lawmakers, users, and companies alike in properly advancing user privacy without intruding on free speech and innovation.

Opt-in Consent to Online Data Gathering
Technology users interact with many online services. The operators of those services generally gather data about what the users are doing on their websites. Some operators also gather data about what the users are doing on other websites, by means of tracking tools. They may then monetize all of this personal data in various ways, including but not limited to targeted advertising, and selling the bundled data—largely unbeknownst to the users that provided it.
New legislation could require the operator of an online service to obtain opt-in consent to collect, use, or share personal data, particularly where that collection, use, or transfer is not necessary to provide the service. The request for opt-in consent should be easy to understand and clearly advise the user what data the operator seeks to gather, how the operator will use it, how long the operator will keep it, and with whom the operator will share it. The request should be renewed any time the operator wishes to use or share data in a new way, or gather a new kind of data. And the user should be able to withdraw consent, including for particular purposes.
Some limits are in order. For example, opt-in consent might not be required for a service to take steps that the user has requested, like collecting a user's mailing address in order to ship them the package they ordered. But the service should always give the user clear notice of the data collection and use, especially when the proposed use is not part of the transaction, like renting the shipping address for junk mail.
Finally, there is a risk that extensive and detailed opt-in consent requirements can lead to “consent fatigue.” Any new regulations should encourage entities seeking consent to explore new ways of obtaining meaningful consent to avoid that fatigue. At the same time, research suggests companies are becoming skilled at manipulating consent, steering users to share personal data.

Right to Know About Data Gathering and Sharing
Users should have an affirmative “right to know” what personal data companies have gathered about them, where they got it, and with whom these companies have shared it (including the government).
Again, some limits are in order to ensure that the right to know doesn’t impinge on other important rights and privileges. For example, there needs to be an exception for news gathering, which is protected by the First Amendment, when undertaken by professional reporters and lay members of the public alike. Thus, if a newspaper tracked visitors to its online edition, the visitors’ right-to-know could cover that information, but not extend to a reporter’s investigative file.

Data Extraction
In general, users should have a legal right to extract a copy of the data they have provided to an online service. People might use this copy in myriad ways, such as self-publishing their earlier comments on social media. Also, this copy might help users to better understand their relationship with the service provider.
In some cases, it may be possible for users to take this copy of their extracted data to a rival service. For example, if a user is dissatisfied with their photo storage service, they could extract a copy of their photos (and associated data) and take it to another photo storage system. In such cases, data portability may promote competition, and hopefully over time will improve services.
However, this right to extraction may need limits for certain services, such as social media, where various users’ data is entangled. For example, suppose Alice posts a photo of herself on social media, under a privacy setting that allows only certain people to see the photo, and Bob (one of those people) posts a comment on the photo. If Bob seeks to extract a copy of the data he provided to that social media service, he should get his comment, but might not necessarily also get Alice’s photo.

Data Breach
Many kinds of organizations gather sensitive information about large numbers of people, yet fail to securely store it. As a result, such data is often leaked, misused, or stolen. What is worse, some organizations fail to notify and assist the injured parties. Victims of data breaches often suffer financial and non-financial harms for years to come.
There are many potential fixes, some easier than others. An easy one: it should be simple and fast to get a credit freeze from a credit reporting agency, which will help prevent any credit fraud following a data breach.
Also, where a company fails to adopt basic security practices, it should be easier for people harmed by data breaches—including those suffering non-financial harms—to take those companies to court.

Considerations When Drafting Any Data Privacy Law
- One Size Does Not Fit All: Policymakers must take care that any of the above requirements don’t create an unfair burden for smaller companies, nonprofits, open source projects, and the like. To avoid that, they should consider tailoring new obligations based on size and purpose of the service in question. For example, policymakers might take account of the entity’s revenue, the number of people employed by the entity, or the number of people whose data the entity collects, among other factors.
- Private Causes of Action: Policymakers should consider whether to include one of the most powerful enforcement tools: Giving ordinary people the ability to take violators to court.
- Independent Audits: Policymakers should consider requiring periodic independent privacy audits. Audits are not a panacea, and policymakers should attend to the issues raised here.
- Data Collection Is Complicated: Policymakers should consult with data experts so they understand what data can be collected and used, under what circumstances.
- Preemption Should Not Be Used To Undermine Better State Protections: There are many benefits to having a uniform standard, rather than forcing companies to comply with 50 different state laws. That said, policymakers at the federal level should take care not to allow weak national standards to thwart better state-level regulations.
- Waivers: Too often, users gain new rights only to effectively lose them when they “agree” to terms of service and end user license agreements that they haven’t read and aren’t expected to read. Policymakers should consider whether and how the rights and obligations they create can be waived, especially where users and companies have unequal bargaining power, and the “waiver” takes the form of a unilateral form contract rather than a fully negotiated agreement. We should be especially wary of mandatory arbitration requirements given that mandatory arbitration is often less protective of users than a judicial process would be.
- No New Criminal Bans: Data privacy laws should not expand the scope or penalties of computer crime laws. Existing computer crime laws are already far too broad.
No privacy law will solve all privacy problems. And every privacy bill must be carefully scrutinized to ensure that it plugs existing gaps without inadvertently stifling free speech and innovation.
In April, Mexican federal police arrested Keith Raniere, taking him from the $10,000-per-week villa where he was staying and extraditing him to New York. According to the NY Daily News, Raniere, leader of self-help group NXIVM (pronounced “nexium”), is now being held without bail while he awaits trial on sex-trafficking charges. Through NXIVM, he preached “empowerment,” but critics say the group was a cult, and engaged in extreme behavior, including branding some women with an iron.
This was not the first controversial program Raniere was involved in. In 1992, Raniere ran a multilevel marketing program called “Consumer Buyline,” which was described as an “illegal pyramid” by the Arkansas Attorney General’s office. More recently, he has collected more than two dozen patents from the U.S. Patent Office, and has more applications pending—including this one, which is for a method of determining “whether a Luciferian can be rehabilitated.”
The USPTO has granted Raniere protection for a variety of curious inventions, including a patent on “analyzing resonance,” which eliminates unwanted frequencies in anything from musical instruments to automobiles. Raniere also received a patent on a virtual currency system, which he dubbed an “entrance-exchange structure and method.” He applied for a patent on a method of “active listening,” and received patents on a system for finding a lost cell phone, and a way of preventing a motor vehicle from running out of fuel. NXIVM members reportedly identified their levels with various colored sashes, which helps explain Raniere’s design patent on a “rational inquiry sash.”
Today, we’re going to focus on Raniere’s U.S. Patent No. 9,421,447, a “method and apparatus for improving performance.” The patent simply adds trivial limitations to the basic functioning of a treadmill, like timing the user and recording certain parameters (speed, heart rate, or turnover rate). Since most modern treadmills allow users to precisely measure performance on a variety of metrics, the patent is arguably broad enough that it could be used to sue treadmill manufacturers or sellers.
Given Raniere’s litigation history, that’s not such a remote possibility. NXIVM has sued its critics for defamation—enough that the Albany Times-Union called NXIVM a “Litigation Machine.” And Raniere sued both AT&T and Microsoft for infringement of some patents relating to video conferencing. The latter suit ended very badly for Raniere, who was ordered to pay attorneys’ fees after he couldn’t prove that he still had ownership of the patents in question. So it’s worth taking a look at how Raniere got the ‘447 patent.
Raniere’s Law ™
Raniere has never been shy about proclaiming how special he is. His bio on a website for Executive Success Programs, a series of courses run by NXIVM, explains that he could “construct full sentences and questions” by the age of one, and read by the age of two. Raniere was an East Coast Judo Champion at age 11, recruits are told, and he entered college at Rensselaer Polytechnic Institute by age 16. The honorifics continue:
He has an estimated problem-solving rarity of one in 425,000,000 with respect to the general population. He has intellectual patents pending in the areas of human potential and ethics, expression, voice and musical training, athletic performance, commerce, education and learning, information processing and human modeling. He also holds several technological patents on computer inventions and a sleep guidance system.
Raniere may be able to convince NXIVM followers that he is a one-in-425 million level genius. A new article from Vanity Fair explains that, inside NXIVM, Raniere’s patents were often used as evidence of his brilliance. But how did Raniere convince the US Patent and Trademark Office of his inventing abilities?
Ultimately, he didn’t really have to. A close look at the history of Raniere’s patent application shows how the deck is stacked in favor of a determined, well-funded applicant: with enough persistence and money, the Patent Office can ultimately be cowed into compliance.
In this case, Raniere’s original patent application claimed a “performance system” with a “control system” and a sensor for monitoring “at least one parameter.” His examples went beyond exercise: he intended to patent humans making mathematical calculations at increasing speed, or a weightlifter decreasing the time between repetitions.
Appropriately, the examiner rejected all 13 of his proposed claims. But nothing stops patent applicants from coming back and trying again—and again—and that’s exactly what Raniere did. To his bare-bones description of a “performance system” he added this dose of jargon:
Wherein said control system includes a device to determine a point of efficiency, said point of efficiency occurring when the linear proportional rate of change in at least one parameter of the subject being trained varies rapidly outside of the state of accommodation and the range of tolerance.
Whew! That’s a lot of verbiage just to explain that the same “performance system” is measuring how fast changes occur. The patent would be infringed by any treadmill that could measure a changing variable. Even though earlier patents had described essentially the same thing, Raniere’s lawyers insisted that his idea of measuring the “rate of change” was “completely different” from a system that used a “precalculated range.”
The examiner rejected Raniere’s application again, noting that an older patent for an exercise bike attached to a video game still fulfilled all the elements of Raniere’s new, jargon-filled patent.
But Raniere simply paid $470 to file a “request for continued examination,” and kept pounding his fist on the proverbial table. Raniere, or his lawyers, bloated Claim 1 up with yet more language about the point of efficiency occurring “just prior to the subject no longer being able to accommodate additional stress” and entering a state of exhaustion, and claimed now that it was this more narrow description that was his stroke of genius.
“Nowhere in [earlier patent] Hall-Tipping is it suggested that the user be exercised to the point of exhaustion,” pointed out Raniere’s lawyers, this time around.
Rejected again, they had an interview with the examiner before coming back with yet another $470 “continued examination” request. Then Raniere loaded up Claim 1 with almost twice as much language about the system repeating itself, and re-measuring new “points of efficiency.”
This went on and on [PDF], with Raniere continuing to change language and add limitations. Eight times, the examiner threw out every single one of his claims. Finally, after he added language about the “range of tolerance” being plus or minus two percent, his claims were allowed.
In his specification, Raniere was typically un-self-effacing. He crowed that he had created “Raniere’s Maximal Efficiency Principle™” or “Raniere’s Law™.” (The guy is clearly into branding.)
Unfortunately, this is par for the course. Determined patent applicants get an endless number of chances to create a piece of intellectual property that just barely avoids all the other patents and non-patent art that overworked patent examiners are able to find. The strategy is: find a basic process, and slowly add limitations until you get a patent. That’s how we get patents on filming a yoga class and Amazon’s patent on white-background photography. The fault lies not so much with the examiner here, but with the Federal Circuit for interpreting patent law’s obviousness standard in a way that effectively prohibits the Patent Office from relying on common sense.
So what’s the solution? We need the Federal Circuit to apply the Supreme Court’s decision in KSR v Teleflex more faithfully and allow the Patent Office to use common sense when faced with mundane claims. We also need to defend the Alice v. CLS Bank ruling so that examiners can reject patents that claim abstract ideas implemented with conventional tools (like treadmills). Patent law should also be changed so that applicants don’t get an endless number of bites at the apple.
As we reported last week, JURI, the key European Parliamentary committee working on copyright reform, voted on June 20th to support compulsory copyright filters for media platforms (Article 13), and to create a new requirement on websites to obtain a license before linking to news stories (Article 11).
That vote marked the last chance for the democratically-elected members of the European Parliament (MEPs) to submit fixes to the directive — under the usual procedures. But this is not an ordinary regulation, and there still remain a couple of unusual procedures that would let them throw out these two disastrous articles.
The first of these happens next week. Generally, the text agreed by the JURI committee would immediately become the basis of a "Trilogue" negotiation between the Parliament, the European Commission (the EU's executive) and the European Council (representatives of its member states). What comes out of that negotiation becomes EU law — and with the JURI vote, all three groups have agreed to include copyright filters and link taxes in the final directive.
However, given the controversy over the directive's contents, we expect some MEPs will invoke "Rule 69(c)" next week. That would lead to a vote of the full Parliament on the JURI text as a negotiating mandate, probably on July 5th.
As Julia Reda, the Pirate Party MEP, explains in the interview below, with enough noise, it may well be possible to get a majority of the Parliament to oppose the JURI decision. That would re-open the directive's text, and allow ordinary MEPs to vote on amendments. Even if we don't get a majority then, it will be important groundwork for the next, highly unusual step: another plenary vote on the negotiated directive some time later this year.
Will they rise to the challenge? Most MEPs — like most Europeans — were unaware of the controversy surrounding Article 11 and 13 until this month. Right now, they're being heavily lobbied by the regulations' supporters: but they're also hearing from thousands of their constituents.
As Reda says, do a little research on where your MEP stands on the issues. Right now, progressive MEPs are being told that Article 13 and 11 will teach the big tech companies a lesson and defeat fake news (no, we still don't understand that one); and conservative MEPs are being told that Europe's businesses support these new property rights. These arguments are deeply misleading — Google, Facebook, Apple and other giants are rarely happy with new regulations, but they'd be able to comply quickly and easily with Article 13 and 11, unlike any emerging competitors who would have no negotiating powers to gain new licenses or build copyright scanning tools. And while it's true that the multinational media and rightsholder conglomerates have pushed for the link tax and the copyright filters, there are many other businesses and non-profit groups that would be caught in the directive's filtering and licensing net.
You don't have to be on the right or the left to decry this directive: the primary community it will affect doesn't have lobbyists in Brussels. They're just ordinary Internet and digital technology users and creators. Europe's lawmakers need to understand that the details of digital copyright are more than just a deal to be brokered between commercial giants — they're a matter of free expression, privacy, and human rights.
That's why the United Nation's human rights experts oppose these articles; why Wikipedia and Creative Commons are fighting it; why the Internet's pioneering technologists and creators told the EU to think again. And that's why your MEP needs to hear from you.
Call now at saveyourinternet.eu.
San Francisco – Two human rights organizations, a digital library, an activist for sex workers, and a certified massage therapist have filed a lawsuit asking a federal court to block enforcement of FOSTA, the new federal law that silences online speech by forcing speakers to self-censor and requiring platforms to censor their users. The plaintiffs are represented by the Electronic Frontier Foundation (EFF), Davis, Wright Tremaine LLP, Walters Law Group, and Daphne Keller.
In Woodhull Freedom Foundation et al. v. United States, the plaintiffs argue that FOSTA is unconstitutional, muzzling online speech that protects and advocates for sex workers and forces well-established, general interest community forums offline for fear of criminal charges and heavy civil liability for things their users might share.
FOSTA, or the Allow States and Victims to Fight Online Sex Trafficking Act, was passed by Congress in March. But instead of focusing on the perpetrators of sex trafficking, FOSTA goes after online speakers, imposing harsh penalties for any website that might “facilitate” prostitution or “contribute to sex trafficking.” The vague language and multiple layers of ambiguity are driving constitutionally protected speech off the Internet at a rapid pace.
For example, plaintiff the Woodhull Freedom Foundation works to support the health, safety, and protection of sex workers, among other things. Woodhull wanted to publish information on its website to help sex workers understand what FOSTA meant to them. But instead, worried about liability under FOSTA, Woodhull was forced to censor its own speech and the speech of others who wanted to contribute to their blog. Woodhull is also concerned about the impact of FOSTA on its upcoming annual summit, scheduled for next month.
“FOSTA chills sexual speech and harms sex workers,” said Ricci Levy, executive director of the Woodhull Freedom Foundation. “It makes it harder for people to take care of and protect themselves, and, as an organization working to protect people’s fundamental human rights, Woodhull is deeply concerned about the damaging impact that this law will have on all people.”
FOSTA calls into serious question the legality of online speech that advocates for the decriminalization of sex work, or provides health and safety information to sex workers. Human Rights Watch (HRW), an international organization that is also a plaintiff, advocates globally for ways to protect sex workers from violence, health risks, and other human rights abuses. The group is concerned that its efforts to expose abuses against sex workers and decriminalize voluntary sex work could be seen as “facilitating” “prostitution,” or in some way assisting sex trafficking.
“HRW relies heavily on individuals spreading its reporting and advocacy through social media,” said Dinah Pokempner, HRW General Counsel. “We are worried that social media platforms and websites may block the sharing of this information out of concern it could be seen as demonstrating a ‘reckless disregard’ of sex trafficking activities under FOSTA. This law is the wrong approach to the scourge of sex trafficking.”
But FOSTA doesn’t just impede the work of sex educators and activists. It also led to the shutdown of Craigslist’s “Therapeutic Services” section, which has imperiled the business of a licensed massage therapist who is another plaintiff in this case. The Internet Archive joined this lawsuit against FOSTA because the law might hinder its work of cataloging and storing 330 billion web pages from 1996 to the present.
Because of the critical issues at stake, the lawsuit filed today asks the court to declare that FOSTA is unconstitutional, and asks that the government be permanently enjoined from enforcing the law.
“FOSTA is the most comprehensive censorship of Internet speech in America in the last 20 years,” said EFF Civil Liberties Director David Greene. “Despite good intentions, Congress wrote an awful and harmful law, and it must be struck down.”
For the full complaint in Woodhull v. United States:
For more on FOSTA:
We are asking a court to declare the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (“FOSTA”) unconstitutional and prevent it from being enforced. The law was written so poorly that it actually criminalizes a substantial amount of protected speech and, according to experts, hinders efforts to prosecute sex traffickers and aid victims.
In our lawsuit, two human rights organizations, an individual advocate for sex workers, a certified non-sexual massage therapist, and the Internet Archive are challenging the law as an unconstitutional violation of the First and Fifth Amendments. Although the law was passed by Congress for the worthy purpose of fighting sex trafficking, its broad language makes criminals of those who advocate for and provide resources to adult, consensual sex workers, and it hinders efforts to prosecute sex traffickers and aid victims.
EFF strongly opposed FOSTA throughout the legislative process. During the months-long Congressional debate on the law, we expressed our concern that it violated free speech rights and would do heavy damage to online freedoms. The bill that was ultimately passed by Congress and signed into law by President Trump was the most egregiously bad of those Congress had been considering.

What FOSTA Changed
FOSTA made three major changes to existing law. The first two involved changes to federal criminal law:
- First, it created an entirely new federal crime by adding a new section to the Mann Act. The new law makes it a crime to “own, manage or operate” an online service with the intent to “promote or facilitate” “the prostitution of another person.” That crime is punishable by up to 10 years in prison. The law further makes it an “aggravated offense,” punishable by up to 25 years in prison, if the “facilitation” was of the prostitution of five or more persons, or if it was done with “reckless disregard” that it “contributed to sex trafficking.” An aggravated violation may also be the basis for an individual’s civil lawsuit. The prior version of the Mann Act only made it illegal to physically transport a person across state lines for the purposes of prostitution.
- Second, FOSTA expanded existing federal criminal sex trafficking law. Before FOSTA, the law made it a crime to knowingly advertise the sexual services of a minor, or of any person doing so only under force, fraud, or coercion, and also criminalized several other modes of conduct. The specific knowledge requirement for advertising (that one must know the advertisement was for sex trafficking) was an acknowledgment that advertising is entitled to some First Amendment protection. The prior law additionally made it a crime to financially benefit from “participation in a venture” of sex trafficking. FOSTA made a seemingly small change to the law: it defined “participation in a venture” extremely broadly to include “assisting, supporting, or facilitating.” But this very broad new language has created great uncertainty about liability for speech other than advertising that someone might interpret as “assisting” or “supporting” sex trafficking, and about what level of awareness of sex trafficking the participant must have.
As is obvious, these expansions of the law are fraught with vague and ambiguous terms that have created great uncertainty about what kind of online speech is now illegal. FOSTA does not define “facilitate,” “promote,” “contribute to sex trafficking,” “assisting,” or “supporting” – but the inclusion of all of these terms shows that Congress intended the law to apply expansively. Plaintiffs thus reasonably fear it will be applied to them. Plaintiffs Woodhull Freedom Foundation and Human Rights Watch advocate for the decriminalization of sex work, both domestically and internationally. It is unclear whether that advocacy is considered “facilitating” prostitution under FOSTA. Plaintiffs Woodhull and Alex Andrews offer substantial resources online to sex workers, including important health and safety information. This protected speech, and other harm reduction efforts, can also be seen as “facilitating” prostitution. And although each of the plaintiffs vehemently opposes sex trafficking, Congress’s expressed sense in passing the law was that sex trafficking and sex work are “inextricably linked.” Plaintiffs are thus legitimately concerned that their advocacy on behalf of sex workers will be seen as done in reckless disregard of some “contribution to sex trafficking.”
The third change significantly undercut the protections of one of the Internet’s most important laws, 47 U.S.C. § 230, originally a provision of the Communications Decency Act, commonly known simply as Section 230 or CDA 230:
- FOSTA significantly undermined the legal protections intermediaries had under Section 230. Section 230 generally immunized intermediaries from liability arising from content created by others—it was thus the chief protection that allowed Internet platforms for user-generated content to exist without having to review every piece of content posted to them for potential legal liability. FOSTA undercut this immunity in three significant ways. First, Section 230 already had an exception for violations of federal criminal law, so the expansion of criminal law described above automatically expanded the Section 230 exception as well. Second, FOSTA eliminated the immunity for state criminal prosecutions under state laws that mirror the violations of federal law. And third, FOSTA allows for lawsuits by individual civil litigants.
The possibility of these state criminal and private civil lawsuits is very troublesome. FOSTA vastly magnifies the risk an Internet host bears of being sued. Whereas federal prosecutors typically pick and choose carefully which violations of law they pursue, the far more numerous state prosecutors may be prone to less selective prosecutions. And civil litigants often do not carefully consider the legal merits of an action before pursuing it in court. Past experience teaches us that they might file lawsuits merely to intimidate a speaker into silence – the cost of defending even a meritless lawsuit being quite high. Lastly, whereas the US Department of Justice may offer clarifying interpretations of a federal criminal law that address concerns about its ambiguity, those interpretations are not binding on state prosecutors or the millions of potential private litigants.

FOSTA Has Already Censored The Internet
As a result of these hugely increased risks of liability, many platforms for online speech have shuttered or restructured. The following are just two examples:
- Two days after the Senate passed FOSTA, Craigslist eliminated its Personals section, including non-sexual subcategories such as “Missed Connections” and “Strictly Platonic.” Craigslist attributed this change to FOSTA, explaining “Any tool or service can be misused. We can’t take such risk without jeopardizing all our other services, so we are regretfully taking craigslist personals offline. Hopefully we can bring them back some day.” Craigslist also shut down its Therapeutic Services section and will not permit ads that were previously listed in Therapeutic Services to be re-listed in other sections, such as Skilled Trade Services or Beauty Services.
- VerifyHim formerly maintained various online tools that helped sex workers avoid abusive clients. It described itself as “the biggest dating blacklist database on earth.” One such resource was JUST FOR SAFETY, which had screening tools designed to help sex workers check to see if they might be meeting someone dangerous, create communities of common interest, and talk directly to each other about safety. Following passage of FOSTA, VerifyHim took down many of these tools, including JUST FOR SAFETY, and explained that it is “working to change the direction of the site.”
Plaintiff Eric Koszyk is a certified massage therapist running his own non-sexual massage business as his primary source of income. Prior to FOSTA he advertised his services exclusively in Craigslist’s Therapeutic Services section. That forum is no longer available and he is unable to run his ad anywhere else on the site, seriously harming his business.

Plaintiff the Internet Archive fears that, on account of FOSTA’s changes to Section 230, it can no longer rely on that law to bar liability for content created by third parties and hosted by the Archive, which comprises the vast majority of material in the Archive’s collection. The Archive is concerned that some third-party content it hosts, such as archives of particular websites, information about books, and the books themselves, could be construed as promoting or facilitating prostitution, or assisting, supporting, or facilitating sex trafficking under FOSTA’s expansive terms.

Plaintiff Alex Andrews maintains the website RateThatRescue.org, a sex worker-led, public, free, community effort to share information about both the organizations and services on which sex workers can rely, and those they should avoid. Because the site consists largely of user-generated content, Andrews relies on Section 230’s protections. She is concerned that FOSTA now exposes her to potentially ruinous civil and criminal liability. She has also suspended work on an app that would offer harm reduction materials to sex workers.

Human Rights Watch relies heavily on individuals spreading its reporting and advocacy through social media. It is concerned that social media platforms and websites that host, disseminate, or allow users to spread its reports and advocacy materials may be inhibited from doing so because of FOSTA.
And many, many others have experienced the same uncertainty and fear of prosecution that have plagued advocates, service providers, platforms, and platform users since FOSTA became law.
We have asked the court to preliminarily enjoin enforcement of the law so that the plaintiffs and others can exercise their First Amendment rights until the court can issue a final ruling. But there is another urgent reason to halt enforcement of the law. Plaintiff Woodhull Freedom Foundation is holding its annual Sexual Freedom Summit August 2-, 2018. As in past years, the Summit features a track on sex work, this year titled “Sex as Work,” that seeks to advance and promote the careers, safety, and dignity of individuals engaged in professional sex work. In presenting and promoting the Sexual Freedom Summit, and the Sex Work Track in particular, Woodhull operates and uses interactive computer services in numerous ways: it uses online databases and cloud storage services to organize, schedule, and plan the Summit; it exchanges emails with organizers, volunteers, website developers, promoters, and presenters during all phases of the Summit; it promotes the titles of all workshops on its Summit website; and it publishes the biographies and contact information of workshop presenters on its website, including those of the sex workers participating in the Sex Work Track and other tracks. Is publishing the name and contact information of a sex worker “facilitating the prostitution of another person”? If it is, FOSTA makes it a crime.
Moreover, most, if not all, of the workshops are also promoted by Woodhull on social media such as Facebook and Twitter; and Woodhull wishes to stream the Sex Work Track on Facebook, as it does other tracks, so that those who cannot attend can benefit from the information and commentary.
Without an injunction, the legality under FOSTA of all of these practices is uncertain. The preliminary injunction is necessary so that Woodhull can conduct the Sex as Work track without fear of prosecution.
It is worth emphasizing that Congress was repeatedly warned that it was passing a law that would censor far more speech than was necessary to address the problem of sex trafficking, and that the law would indeed hinder law enforcement efforts and pose great dangers to sex workers. During the Congressional debate on FOSTA and SESTA, anti-trafficking groups such as Freedom Network and the International Women’s Health Coalition issued statements warning that the laws would hurt efforts to aid trafficking victims, not help them.
Even Senator Richard Blumenthal, an original cosponsor of SESTA (the Senate bill), criticized the new Mann Act provision when it was proposed in the House bill, telling Wired that “there is no good reason to proceed with a proposal that is opposed by the very survivors it claims to support.” Nevertheless, Senator Blumenthal ultimately voted to pass FOSTA.
In support of the preliminary injunction, we have submitted the declarations of several experts who confirm the harmful effects FOSTA is having: sex workers are being driven back to far more dangerous street-based work as online classified sites disappear; online “bad date lists” that informed sex workers of risks associated with certain clients have been lost; and sex trafficking has become less visible to law enforcement, which can no longer scour and analyze the formerly public websites where it had been advertised. For more information see the Declarations of Dr. Alexandra Lutnick, Prof. Alexandra Frell Levey, and Dr. Kimberly Mehlman-Orozco.

Related Cases: Woodhull Freedom Foundation et al. v. United States