Electronic Frontier Foundation

Proctoring Tools and Dragnet Investigations Rob Students of Due Process

EFF - Thu, 04/15/2021 - 4:21pm

Like many schools, Dartmouth College has increasingly turned to technology to monitor students taking exams at home. And while many universities have used proctoring tools that purport to help educators prevent cheating, Dartmouth’s Geisel School of Medicine has gone dangerously further. Apparently working under an assumption of guilt, the university is in the midst of a dragnet investigation of complicated system logs, searching for data that might reveal student misconduct, without a clear understanding of how those logs can be littered with false positives. Worse still, those attempting to assert their rights have been met with a university administration more willing to trust opaque investigations of inconclusive data sets than their own students.

The Boston Globe explains that the medical school administration’s attempts to detect supposed cheating have become a flashpoint on campus, exemplifying a worrying trend of schools prioritizing misleading data over the word of their students. The misguided dragnet investigation has cast a shadow over the career aspirations of over twenty medical students.

Dartmouth medical school has cast suspicion on students by relying on access logs that are far from concrete evidence of cheating

What’s Wrong With Dartmouth’s Investigation

In March, Dartmouth’s Committee on Student Performance and Conduct (CSPC) accused several students of accessing restricted materials online during exams. These accusations were based on a flawed review of an entire year’s worth of the students’ log data from Canvas, the online learning platform that contains class lectures and information. This broad search was instigated by a single incident of confirmed misconduct, according to a contentious town hall between administrators and students (we've re-uploaded this town hall, as it is now behind a Dartmouth login screen). These logs show traffic between students’ devices and specific files on Canvas, some of which contain class materials, such as lecture slides. At first glance, the logs showing that a student’s device connected to class files would appear incriminating: timestamps indicate the files were retrieved while students were taking exams. 

But after reviewing the logs that were sent to EFF by a student advocate, it is clear to us that there is no way to determine whether this traffic happened intentionally, or instead automatically, as background requests from student devices, such as cell phones, that were logged into Canvas but not in use. In other words, rather than the files being deliberately accessed during exams, the logs could have easily been generated by the automated syncing of course material to devices logged into Canvas but not used during an exam. It’s simply impossible to know from the logs alone if a student intentionally accessed any of the files, or if the pings exist due to automatic refresh processes that are commonplace in most websites and online services. Most of us don’t log out of every app, service, or webpage on our smartphones when we’re not using them.

Much like a cell phone pinging a tower, the logs show files being pinged in short time periods and sometimes being accessed at the exact second that students are also entering information into the exam, suggesting a non-deliberate process. The logs also reveal that the files accessed are largely irrelevant to the tests in question, again indicating an automated, random process. A UCLA statistician wrote a letter explaining that even an automated process can result in multiple false-positive outcomes. Canvas' own documentation explicitly states that the data in these logs "is meant to be used for rollups and analysis in the aggregate, not in isolation for auditing or other high-stakes analysis involving examining single users or small samples." Given the technical realities of how these background refreshes take place, the log data alone should be nowhere near sufficient to convict a student of academic dishonesty.
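To make this concrete, consider the kind of analysis a reviewer would need before treating a log line as evidence. The sketch below is purely illustrative (the field names, thresholds, and log format are our own assumptions, not Canvas's actual schema), but it shows why timing patterns matter: a burst of requests to several unrelated files within a second or two looks like a device syncing in the background, not a student hunting for answers.

from datetime import datetime, timedelta

def likely_automated(entries, window=timedelta(seconds=2), distinct_files=3):
    """Flag log entries that arrive in tight bursts touching several unrelated
    files, a pattern more consistent with background refresh than with a
    person deliberately opening documents one at a time."""
    entries = sorted(entries, key=lambda e: e["timestamp"])
    flagged = []
    for i, first in enumerate(entries):
        burst = [e for e in entries[i:] if e["timestamp"] - first["timestamp"] <= window]
        if len({e["file_id"] for e in burst}) >= distinct_files:
            flagged.extend(e for e in burst if e not in flagged)
    return flagged

# Hypothetical entries: three different files "accessed" within one second.
logs = [
    {"timestamp": datetime(2021, 3, 5, 9, 0, 1), "file_id": "lecture_12_slides"},
    {"timestamp": datetime(2021, 3, 5, 9, 0, 1), "file_id": "lecture_31_slides"},
    {"timestamp": datetime(2021, 3, 5, 9, 0, 2), "file_id": "course_syllabus"},
]
print(len(likely_automated(logs)))  # 3 -- the whole burst looks automated

Real logs would of course require far more care than this toy heuristic, which is exactly the point: drawing conclusions from them demands technical analysis, not a glance at timestamps.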

Along with The Foundation for Individual Rights in Education (FIRE), EFF sent a letter to the Dean of the Medical School on March 30th, explaining how these background connections work and pointing out that the university has likely turned random correlations into accusations of misconduct. The Dean’s reply was that the cases are being reviewed fairly. We disagree.

For the last year, we’ve seen far too many schools ignore legitimate student concerns about inadequate, or overbroad, anti-cheating software

It appears that the administration is the victim of confirmation bias, turning fallacious evidence of misconduct into accusations of cheating. The school has admitted in some cases that the log data appeared to have been created automatically, acquitting some students who pushed back. But other students have been sanctioned, apparently entirely based on this spurious interpretation of the log data. Many others are anxiously waiting to hear whether they will be convicted so they can begin the appeal process, potentially with legal counsel. 

These convictions carry heavy weight, leaving permanent marks on student transcripts that could make it harder for them to enter residencies and complete their medical training. At this level of education, this is not just about being accused of cheating on a specific exam. Being convicted of academic dishonesty could derail an entire career. 

University Stifles Speech After Students Express Concerns Online

Worse still, following posts from an anonymous Instagram account apparently run by students concerned about the cheating accusations and how they were being handled, the Office of Student Affairs introduced a new social media policy.

An anonymous Instagram account detailed some concerns students have with how these cheating allegations were being handled (accessed April 7). As of April 15, the account was offline.

The policy was emailed to students on April 7 but backdated to April 5—the day the Instagram posts appeared. The new policy states that, "Disparaging other members of the Geisel UME community will trigger disciplinary review." It also prohibits social media speech that is not “courteous, respectful, and considerate of others” or speech that is “inappropriate.” Finally, the policy warns, "Students who do not follow these expectations may face disciplinary actions including dismissal from the School of Medicine." 

One might wonder whether such a policy is legal. Unfortunately, Dartmouth is a private institution and so not prohibited by the First Amendment from regulating student speech.

If it were a public university with a narrower ability to regulate student speech, the school would be stepping outside the bounds of its authority if it enforced the social media policy against medical school students speaking out about the cheating scandal. On the one hand, courts have upheld the regulation of speech by students in professional programs at public universities under codes of ethics and other established guidance on professional conduct. For example, in a case about a mortuary student’s posts on Facebook, the Minnesota Supreme Court held that a university may regulate students’ social media speech if the rules are “narrowly tailored and directly related to established professional conduct standards.” Similarly, in a case about a nursing student’s posts on Facebook, the Eleventh Circuit held that “professional school[s] have discretion to require compliance with recognized standards of the profession, both on and off campus, so long as their actions are reasonably related to legitimate pedagogical concerns.” On the other hand, the Sixth Circuit has held that a university can’t invoke a professional code of ethics to discipline a student when doing so is clearly a “pretext” for punishing the student for her constitutionally protected speech.

Although the Dartmouth medical school is immune from a claim that its social media policy violates the First Amendment, it seems that the policy might unfortunately be a pretext to punish students for legitimate speech. Although the policy states that the school is concerned about social media posts that are “lapses in the standards of professionalism,” the timing of the policy suggests that the administrators are sending a message to students who dare speak out against the school’s dubious allegations of cheating. This will surely have a chilling effect on the community to the extent that students will refrain from expressing their opinions about events that occur on campus and affect their future careers. The Instagram account was later taken down, indicating that the chilling effect on speech may have already occurred. (Several days later, a person not affiliated with Dartmouth, and therefore protected from reprisal, reposted many of the original account's posts.)

Students are at the mercy of private universities when it comes to whether their freedom of speech will be respected. Students select private schools based on their academic reputation and history, and don’t necessarily think about a school’s speech policies. Private schools shouldn’t take advantage of this, and should instead seek to sincerely uphold free speech principles.

Investigations of Students Must Start With Concrete Evidence

Though this investigation wasn’t the result of proctoring software, it is part and parcel of a larger problem: educators using the pandemic as an excuse to comb for evidence of cheating in places that are far outside their technical expertise. Proctoring tools and investigations like this one flag students based on flawed metrics and misunderstandings of technical processes, rather than concrete evidence of misconduct. 

Simply put: these logs should not be used as the sole evidence for potentially ruining a student’s career. 

Proctoring software that assumes all students take tests the same way—for example, in rooms that they can control, their eyes straight ahead, fingers typing at a routine pace—puts a black mark on the record of students who operate outside the norm. One problem that has been widely documented with proctoring software is that students with disabilities (especially those with motor impairments) are consistently flagged as exhibiting suspicious behavior by software suites intended to detect cheating. Other proctoring software has flagged students for technical snafus such as device crashes and Internet outages, as well as completely normal behavior that could indicate misconduct only if you squint hard enough.

For the last year, we’ve seen far too many schools ignore legitimate student concerns about inadequate, or overbroad, anti-cheating software. Across the country, thousands of students, and some parents, have created petitions against the use of proctoring tools, most of which (though not all) have been ignored. Students taking the California and New York bar exams—as well as several advocacy organizations and a group of deans—advocated against the use of proctoring tools for those exams. As expected, many of those students then experienced “significant software problems” with the ExamSoft proctoring software specifically, causing some students to fail.

Many proctoring companies have defended their dangerous, inequitable, privacy-invasive, and often flawed software tools by pointing out that humans—meaning teachers or administrators—usually have the ability to review flagged exams to determine whether or not a student was actually cheating. That defense rings hollow when those reviewing the results don’t have the technical expertise—or in some cases, the time or inclination—to properly examine them.

Similar to schools that rely heavily on flawed proctoring software, Dartmouth medical school has cast suspicion on students by relying on access logs that are far from concrete evidence of cheating. Simply put: these logs should not be used as the sole evidence for potentially ruining a student’s career. 

The Dartmouth faculty has stated that they will not continue to look at Canvas logs in the future for violations (51:45 into the video of the town hall). That’s a good step forward. We insist that the school also look beyond these logs for the students currently being investigated, and end this dragnet investigation entirely, unless additional evidence is presented.

EFF Partners with DuckDuckGo to Enhance Secure Browsing and Protect User Information on the Web

EFF - Thu, 04/15/2021 - 9:33am
DuckDuckGo Smarter Encryption Will be Incorporated Into HTTPS Everywhere

San Francisco, California—Boosting protection of Internet users’ personal data from snooping advertisers and third-party trackers, the Electronic Frontier Foundation (EFF) today announced it has enhanced its groundbreaking HTTPS Everywhere browser extension by incorporating rulesets from DuckDuckGo Smarter Encryption.

The partnership represents the next step in the evolution of HTTPS Everywhere, a collaboration with The Tor Project and a key component of EFF’s effort to encrypt the web and make the Internet ecosystem safe for users and website owners.

“DuckDuckGo Smarter Encryption has a list of millions of HTTPS-encrypted websites, generated by continually crawling the web instead of through crowdsourcing, which will give HTTPS Everywhere users more coverage for secure browsing,” said Alexis Hancock, EFF Director of Engineering and manager of HTTPS Everywhere and Certbot web encrypting projects. “We’re thrilled to be partnering with DuckDuckGo as we see HTTPS become the default protocol on the net and contemplate HTTPS Everywhere’s future.”

“EFF’s pioneering work with the HTTPS Everywhere extension took privacy protection in a new and needed direction, seamlessly upgrading people to secure website connections,” said Gabriel Weinberg, DuckDuckGo founder and CEO. “We're delighted that EFF has now entrusted DuckDuckGo to power HTTPS Everywhere going forward, using our next generation Smarter Encryption dataset."

When EFF launched HTTPS Everywhere over a decade ago, the majority of web servers used the non-secure HTTP protocol to transfer web pages to browsers, rendering user content and information vulnerable to attacks.

EFF began building and maintaining a crowd-sourced list of encrypted HTTPS versions of websites for a free browser extension—HTTPS Everywhere—which automatically takes users to them. That keeps users’ web searching, pages visited, and other private information encrypted and safe from trackers and data thieves that try to intercept and steal personal information in transit from their browser.

Fast forward ten years—the web is undergoing a massive change to HTTPS. Mozilla’s Firefox has an HTTPS-only mode, while Google Chrome is slowly moving towards HTTPS as the default.

DuckDuckGo, a privacy-focused search engine, also joined the effort with Smarter Encryption to help users browse securely by detecting unencrypted, non-secure HTTP connections to websites and automatically upgrading them to encrypted connections.
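Conceptually, both the HTTPS Everywhere rulesets and the Smarter Encryption list answer the same question: for a given site, is an encrypted version known to exist? The toy Python sketch below only illustrates that idea (the real rulesets are XML files with regex targets, and the Smarter Encryption list covers millions of domains); it is not the extension's actual code.

from urllib.parse import urlsplit, urlunsplit

# Stand-in for the real ruleset/Smarter Encryption data: hosts known to work over HTTPS.
KNOWN_HTTPS_HOSTS = {"www.eff.org", "duckduckgo.com"}

def upgrade(url: str) -> str:
    """Rewrite an http:// URL to https:// when the host is known to support it."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in KNOWN_HTTPS_HOSTS:
        return urlunsplit(("https",) + tuple(parts[1:]))
    return url  # otherwise leave the URL untouched

print(upgrade("http://www.eff.org/https-everywhere"))  # https://www.eff.org/https-everywhere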

With more domain coverage in Smarter Encryption, HTTPS Everywhere users are provided even more protection. HTTPS Everywhere rulesets will continue to be hosted through this year, giving our partners who use them time to adjust. We will stop taking new requests for domains to be added at the end of May.

To download HTTPS Everywhere:
https://www.eff.org/https-everywhere

For more on encrypting the web:
https://www.eff.org/encrypt-the-web

For more from DuckDuckGo:
https://spreadprivacy.com/eff-adopts-duckduckgo-smarter-encryption/

Contact: Alexis Hancock, Director of Engineering, Certbot, alexis@eff.org; DuckDuckGo, Press@duckduckgo.com

HTTPS Everywhere Now Uses DuckDuckGo’s Smarter Encryption

EFF - Wed, 04/14/2021 - 8:58pm

Over the last few months the HTTPS Everywhere project has been deciding what to do with the new landscape of HTTPS in major browsers. Encrypted web traffic has increased in the last few years, and major browsers have made strides in seeing that HTTPS becomes the default. This project has shepherded a decade of encrypted web traffic, and we look forward to continuing to protect people as new developments occur. That said, we’d like to announce that we have partnered with the DuckDuckGo team to incorporate their Smarter Encryption rulesets into the HTTPS Everywhere web extension.

This is happening for several reasons:

  • Firefox has an HTTPS-Only Mode now.
  • Chrome doesn’t use HTTPS by default, but is slowly moving towards that goal by now trying HTTPS first in the navigation bar before falling back to HTTP.
  • DuckDuckGo’s Smarter Encryption covers more domains than our current model.
  • Browsers and websites are moving away from issues that created a need for more granular ruleset maintenance.
    • Mixed content is now blocked in major browsers
    • Serving secure connections from separate domains (e.g., secure.google.com) is now a fading practice, further removing the need for granular maintenance of HTTPS Everywhere rulesets
    • Chrome’s Manifest V3 declarativeNetRequest API will impose a ruleset cap on web extensions. Rather than having extensions compete for that space, users will receive virtually the same coverage whether they prefer HTTPS Everywhere or DuckDuckGo's Privacy Essentials. We don’t want to create confusion for users on “who to choose” when it comes to getting the best coverage.
    • As HTTPS Everywhere goes into “maintenance mode”, users will have the opportunity to move to DuckDuckGo’s Privacy Essentials or use a browser that has HTTPS by default.

More info on DuckDuckGo’s Smarter Encryption here: https://spreadprivacy.com/duckduckgo-smarter-encryption/

Phases for HTTPS Everywhere’s Rulesets

  • DuckDuckGo Update Channel with Smarter Encryption Rulesets [April 15, 2021].
  • Still accept HTTPS Everywhere Ruleset changes in the GitHub repo until the end of May 2021.
  • Still host HTTPS Everywhere Rulesets until the various partners and downstream channels that use our current rulesets make needed changes and decisions.
  • Sunset HTTPS Everywhere Rulesets [Late 2021]

Afterwards, the HTTPS Everywhere web extension will enter its End of Life (EoL) stage, on a timeline to be determined after the sunset of the HTTPS Everywhere Rulesets is complete. By adding the DuckDuckGo Smarter Encryption update channel, we can give everyone time to adjust and plan.

Thank you for contributing and using this project through the years. We hope you can celebrate with us the monumental effort HTTPS Everywhere has accomplished.

Congress, Don’t Let ISP Lobbyists Sabotage Fiber for All

EFF - Wed, 04/14/2021 - 3:16pm

For the first time, an American president has proposed a plan that wouldn’t just make a dent in the digital divide; it would end it. By deploying federal resources at a level and scale this country has not seen since electrification nearly 100 years ago, the U.S. will again connect every resident to a necessary service. As the pandemic has proven, robust internet access is, like water and electricity, an essential service. And so the effort and resources expended are well worth it.

The president’s plan, which matches well with current Congressional efforts by Representative James Clyburn and Senator Klobuchar, is welcome news and a boost to efforts by Congress to get the job done. The plan draws a necessary line: government should only invest its dollars in “future-proof” (i.e., fiber) broadband infrastructure, something we have failed to do for years as subsidies flowed to networks meeting persistently low metrics for what qualifies as broadband. Historically, the low expectations pushed by the telecom industry have resulted in a lot of wasted tax dollars. Every state (barring North Dakota) has wasted billions propping up outdated networks. Americans are left with infrastructure unprepared for the 21st century, a terrible ratio of price to internet speed, and one of the largest private broadband provider bankruptcies in history.

Policymakers are learning from these mistakes, as well as from the demand shown by the COVID-19 crisis, and this is the year we chart a new course. Now is the time for people to push their elected representatives in Congress to pass the Accessible, Affordable Internet for All Act.

Take Action

Tell Congress: Americans Deserve Fast, Reliable, and Affordable Internet

What Is “Future-Proof" and Why Does Fiber Meet the Definition?

Fiber is the superior medium for 21st-century broadband, and policymakers need to understand that reality when making decisions about broadband policy. No other data transmission medium has the inherent capacity and future potential of fiber, which is why all 21st-century networks, from 5G to Low Earth Orbit satellites, depend on fiber. However, parts of the broadband access industry keep trying to sell state and federal legislatures on the need to subsidize slower, outdated infrastructure, which diverts money away from fiber and puts it into their pockets.

What allows for that type of grifting on a massive scale is the absence of a government mandate for future-proofing in its infrastructure investments. Without a future-proof requirement in law, the government subsidizes whatever meets the lowest speed required today without a thought about tomorrow. This is arguably one of the biggest reasons so much of our broadband subsidy money has gone to waste over the decades: it flowed to any qualifying service with no thought toward the underlying infrastructure. The President’s endorsement of future-proofing broadband infrastructure is a significant and necessary course correction, and it needs to be codified into the infrastructure package Congress is contemplating.

If every dollar the government spends on building broadband infrastructure had to go to infrastructure that meets the needs of today and far into the future, a lot of older, slower, more expensive broadband options would no longer be eligible for government money or qualify as sufficient. At the same time, that same fiber infrastructure can be leveraged to enable dozens of services beyond broadband access, including other data-intensive services, which makes the Accessible, Affordable Internet for All Act’s preference for open access important and likely worth expanding, given how fiber lifts all boats. One possible solution is to require government bidders to design networks around enabling wireless and wireline services instead of just one broadband service. Notably, the legislation currently lacks a future-proof requirement for government investments, but given the President’s endorsement, it is our hope that it will be included in any final package that passes Congress, to avoid wastefully enriching capacity-limited (and future-limited) broadband services. Building the roads to enable commerce should be how the government views broadband infrastructure.

We Need Fast Upload Speeds and Fiber, No Matter What Cable Lobbyists Try to Claim

The cable industry unsurprisingly hates the President’s proposal to get fiber to everyone, because cable companies are the high-speed monopolists for the vast majority of us and nothing forces them to upgrade other than fiber. Fiber has orders of magnitude greater capacity than cable systems and will be able to deliver cheaper access in the high-speed era than cable systems as demand grows. In the 2020 debate over California’s broadband program, when the state was deciding whether or not to continue subsidizing DSL copper broadband, the cable lobby regularly argued that no one needs the fast uploads fiber provides because, look, no one who uses cable broadband uses much upload (especially not the throttled users).

There are a lot of flaws with the premise that asymmetric use of the internet is user preference, rather than the result of the current standards and cable packages. But the most obvious one is the fact that you need a market of symmetric gigabit and higher users for the technology sector to develop widely used applications and services. 

Just like with every other innovation we have seen on the internet, if the capacity is delivered, someone comes up with ways to use it. And that multi-gigabit market is coming online, but it will start in China, where an estimated 57% of all multi-gigabit users will reside by 2023 under current projections. China has in fact been laying fiber optics nine times faster than the United States since 2013 and that is a problem for American competitiveness. If we are not building fiber networks everywhere soon to catch up in the next five years, the next Silicon Valley built around the gigabit era of broadband access will be in China and not here. It will also mean that next-generation applications and services will just be unusable by Americans stuck on upload throttled cable systems. The absence of a major fiber infrastructure investment by the government effectively means many of us will be stuck in the past while paying monopoly rents to cable.

Fiber Is Not “Too Expensive,” Nor Should Investment Go to Starlink (SpaceX) No Matter What Its Lobbyists Say

The current FCC is still reeling from the fact that the outgoing FCC, at the end of its term, granted nearly $1 billion to Starlink to cover a lot of questionable things, despite the company historically saying it did not need a dollar of subsidy, and despite the fact that it really does not.

But if government money is on the table, it appears clear that Starlink will deploy its lobbyists to argue for its share, even when it needs none of it. That should be a clear concern to any congressional office that will be lobbied on the broadband infrastructure package. Under the current draft of the Accessible, Affordable Internet for All Act, it seems very unlikely that Starlink’s current deployment will qualify for any money, and certainly not if future proofing was included as a condition of receiving federal dollars. 

This is because satellites are inherently capacity-constrained as a means of delivering broadband access. Each satellite must maintain line of sight and can carry only so much traffic; adding capacity requires more and more satellites to share the burden; and every satellite ultimately needs to beam down to a fiber-connected base station. This capacity constraint is why Starlink will never be competitive in cities. But Starlink does benefit from a focus on fiber: the more places its base stations can connect between fiber and satellites, the more robust the network itself will become.

But in the end, there is no way for those satellites to keep up with the expected increases in capacity that fiber infrastructure will yield, nor are they as long-lasting an investment: each satellite needs to be replaced fairly frequently as new ones are launched, while fiber, once laid, will remain useful for decades. While Starlink’s lobbyists will argue that it is a cheaper solution for rural Americans, the fact is the number of Americans that cannot feasibly get a fiber line is extremely small. Basically, if you can get an electrical line to a house, then you can get a fiber line to it.

For policymakers, Starlink’s service is best understood as the equivalent of a universal basic income of broadband access: it reaches far and wide and establishes a robust floor. That on its own has a lot of value and is a reason why its effort to expand to boats, trucks, and airplanes is a good thing. But this is not the tool of long-term economic development for rural communities. It should be understood as a lifeline when all other options are exhausted, rather than the government's primary solution for ending the digital divide. Lastly, given that the satellite constellation is meant to serve customers globally and not just in the United States, it makes no sense for the United States to subsidize a global infrastructure to enrich one private company. The investments need to be in domestic infrastructure.

The Future Is Not 5G, No Matter What Wireless Carrier Lobbyists Say

For years, the wireless industry hyped 5G broadband, resulting in a lot of congressional hearings, countless hours of the FCC focusing its regulatory power on it, a massive merger between T-Mobile and Sprint, and very little to actually show for it today. In fact, early market analysis has found that 5G broadband is making zero profits, mostly because people are not willing to pay that much more on their wireless bill for the new network. The reality is that 5G’s future is likely to be in non-broadband markets that have yet to emerge. But most importantly, national 5G coverage does not happen if you don’t have dense fiber networks everywhere. Any infrastructure plan that comes out of the government should avoid making 5G part of its core focus, given that it is a derivative benefit of fiber. You can have fiber without 5G, but you can’t have 5G without the fiber.

So even as companies like AT&T argue for 5G to be the infrastructure plan, the wireless industry has slowly started to come around to the fact that it's actually the fiber that matters in the end. Last year, the wireless industry acknowledged that 5G and fiber are linked, and even AT&T is now emphasizing that the future is fiber. The risks are great if we put wireless on par with wires in infrastructure policy, as we are seeing now with the Rural Digital Opportunity Fund giving money to potentially speculative gigabit wireless bids instead of proven fiber to the home. This has prompted a lot of Members of Congress to ask the FCC to double-check the applicants before money goes out the door. Rather than repeat the RDOF mistakes, it's best to understand that infrastructure means the foundation of what delivers these services, not the services themselves.

We Can Get Fiber to Everyone If We Embrace the Public Model of Broadband Access

The industry across the board refuses to acknowledge that the private model of broadband has failed many communities before and during the pandemic, whereas the public model of broadband has soared in areas where it exists. Both President Biden’s plan and the Clyburn/Klobuchar legislation place an emphasis on embracing local governments and local community broadband networks. The Accessible, Affordable Internet for All Act outright bans states from preventing local governments from building broadband service. The state laws that would be repealed by the bill were primarily driven by a cable lobby afraid of cheap public fiber access.

By now we know their opposition to community fiber is premised on keeping broadband prices exceedingly high. We know that if you eliminate the profit motive in delivering fiber, there is almost no place in this country that can’t be connected to fiber. When we have cooperatives delivering gigabit fiber at $100 a month to areas averaging 2.5 people per mile, one is hard-pressed to find what areas are left out of reach. But equally important is making access affordable to low-income people, and given that we’ve seen municipal fiber offer ten years of free 100/100 Mbps service to 28,000 students from low-income families at a subsidy cost of $3.50 a month, it seems clear that public fiber is an essential piece of solving both the universal access challenge and making sure all people can afford to connect to the internet. Whatever infrastructure bill passes Congress, it must fully embrace the public model of access for fiber for all to be a reality.

 

Forced Arbitration Thwarts Legal Challenge to AT&T’s Disclosure of Customer Location Data

EFF - Wed, 04/14/2021 - 12:26pm

Location data generated from our cell phones paint an incredibly detailed picture of our movements and private lives. Despite the sensitive nature of this data and a federal law prohibiting cellphone carriers from disclosing it, repeated unauthorized disclosures over the last several years show that carriers will sell this sensitive information to almost any willing buyer.

With cellphone carriers brazenly violating their customers’ privacy and the Federal Communication Commission moving slowly to investigate, it fell to consumers to protect themselves. That’s why in June 2019 EFF filed a lawsuit representing customers challenging AT&T’s unlawful disclosure of their location data. Our co-counsel are lawyers at Hagens Berman Sobol Shapiro LLP. The case, Scott v. AT&T, alleged that AT&T had violated a federal privacy law protecting cellphone customers’ location data, among other protections.

How AT&T Compelled Arbitration

That legal challenge, however, quickly ran into an all-too-familiar roadblock: the arbitration agreements AT&T forces its customers to sign every time they buy a cellphone or new service from the company. AT&T claimed that this clause prevented the Scott case from proceeding.

The court ended up dismissing the plaintiffs’ lawsuit earlier this year. The way it did so demonstrates why Congress needs to change federal law so that the public can meaningfully protect themselves from companies’ abusive practices.

In response to the lawsuit, AT&T first moved to compel the plaintiffs to arbitration, arguing that because they had signed arbitration agreements—buried deep within an ocean of contract terms—they had no right to sue.

But under California law, AT&T cannot enforce contracts, like those at issue here, which prevent people from seeking court orders, called “public injunctions,” to prevent future harm to the public. California law also recognizes that these one-sided “contracts of adhesion” can sometimes be so unfair that they cannot be enforced. We argued that both of these principles voided AT&T’s contracts, emphasizing that our clients sought to prevent AT&T from disclosing all customers’ location data without their notice and consent to protect the broader public’s privacy and to prevent AT&T from publicly misrepresenting its practices moving forward.

AT&T responded by moving to dismiss the public injunction claims, asserting that because the company stopped disclosing customer location data to certain third parties identified in media reports, plaintiffs had no legal basis–known as standing–to seek a public injunction. AT&T’s strategy was clear: rather than admit that it had done anything wrong in the past, the company argued that because it had stopped disclosing customer location data, there was no future public harm that the court needed to prohibit via a public injunction. Because no public injunction was necessary, AT&T argued, California’s rule against the arbitration agreements did not apply and the plaintiffs remained subject to them.

We did not trust AT&T’s representations that it stopped disclosing customer location data, particularly because the company had previously promised to stop disclosing the same data, only for media reports to later show that the disclosures were ongoing. Additionally, AT&T was not clear about whether it had other programs or services that disclosed the same location data without customers’ knowledge and consent.

The plaintiffs spent months trying to learn more about AT&T’s location data sharing practices in the face of AT&T’s stonewalling. What we found was concerning: AT&T continued to disclose customer location data, including to enable commercial call routing by third-party services, without customers’ notice and consent. We asked the court to let the case proceed, arguing that this information undercut AT&T’s claims that it had stopped its harmful practices.

The court sided with AT&T. It ruled that the evidence did not establish that there was an ongoing risk that AT&T would disclose customer location data in the future and thus plaintiffs lacked standing to seek a public injunction. Next, the court upheld the legality of AT&T’s one-sided contracts and ruled that plaintiffs could be forced into arbitration.

We disagree with the court’s ruling in multiple respects. The court largely ignored evidence in the record showing that AT&T continues to disclose customer location data, putting all of its customers’ privacy at risk. It also mischaracterized plaintiffs’ allegations, allowing the court to avoid having to wrestle with AT&T’s ongoing privacy failures. Finally, the court failed to protect consumers subject to AT&T’s one-sided arbitration agreements—these contracts are fundamentally unfair and their continued enforcement is unjust.

Importantly, the court did not rule on the merits: it did not decide whether AT&T’s disclosure of customer location data was lawful. Instead, it sidestepped that question by deciding that the plaintiffs’ case didn’t belong in federal court.

Next Steps: Legislative Reform of Arbitration Agreements

The court’s decision to enforce AT&T’s arbitration agreement is problematic because it prevents consumers from vindicating their rights under a longstanding federal privacy law written to protect them. Unlike other areas of consumer privacy where comprehensive federal legislation is sorely needed, Congress has already prohibited phone services like AT&T from disclosing customer location data without notice and consent.

The legislative problem this case highlights is different: rather than writing a new law, Congress needs to amend an existing one—the Federal Arbitration Act. Arbitration was originally intended to allow large, sophisticated entities like corporations to avoid expensive legal fights. Today, however, it is used to prevent consumers, employees, and anyone with less bargaining power from having any meaningful redress in court. Congress can easily fix this injustice by prohibiting forced arbitration in one-sided contracts of adhesion, and it’s past time that they did so.

Likewise, when Congress enacts a comprehensive consumer data privacy law, it must bar enforcement of arbitration agreements that unfairly limit user enforcement of their legal rights in court. The better proposed bills do so.

Despite the federal court’s dismissal of the case against AT&T, we remain hopeful that the FCC will take action against the company for its disclosure of location data. The agency began an enforcement proceeding last year, and we hope that once President Biden appoints new FCC leadership, the agency will move quickly to hold AT&T accountable.

Related Cases: Geolocation Privacy

California: Demand Broadband for All

EFF - Tue, 04/13/2021 - 7:11pm

From the pandemic to the Frontier bankruptcy to the ongoing failures in remote learning, we’ve seen now more than ever how current broadband infrastructure fails to meet the needs of the people. This pain is particularly felt in already under-served communities—urban and rural—where poverty and lack of choice leave millions at the mercy of monopolistic Internet Service Providers (ISPs) who have functionally abandoned them.

 Take Action

Tell your Senators to Support S.B. 4

This is why EFF is part of a coalition of nonprofits, private-sector companies, and local governments in support of S.B. 4. Authored by California State Senator Lena Gonzalez, the bill would promote construction of the 21st century infrastructure necessary to finally put a dent in, and eventually close, the digital divide in California.

S.B. 4 passed out of the California Senate Energy, Utilities, and Communications Committee by a vote of 11-1 on April 12. This demonstrates that lawmakers who understand these issues recognize it is vital for communities who are suffering at the hands of ISP monopolies to have greater opportunities to get the Internet access they need.

If the monopolistic ISPs didn’t come through with adequate service during a time when many Californians' entire lives depended on the quality of their broadband, they aren’t coming through now. It is high time local communities are allowed to take the future into their own hands and build out what they need. S.B. 4 is California’s path to doing so.

Why EFF Supports Repeal of Qualified Immunity

EFF - Mon, 04/12/2021 - 5:28pm

Our digital rights are only as strong as our power to enforce them. But when we sue government officials for violating our digital rights, they often get away with it because of a dangerous legal doctrine called “qualified immunity.”

Do you think you have a First Amendment right to use your cell phone to record on-duty police officers, or to use your social media account to criticize politicians? Do you think you have a Fourth Amendment right to privacy in the content of your personal emails? Courts often protect these rights. But some judges invoke qualified immunity to avoid affirmatively recognizing them, or if they do recognize them, to avoid holding government officials accountable for violating them.

Because of these evasions of judicial responsibility to enforce the Constitution, some government officials continue to invade our digital rights. The time is now for legislatures to repeal this doctrine.

What is Qualified Immunity?

In 1871, at the height of Reconstruction following the Civil War, Congress enacted a landmark law empowering people to sue state and local officials who violated their constitutional rights. This was a direct response to state-sanctioned violence against Black people that continued despite the formal end of slavery. The law is codified today at 42 U.S.C. § 1983.

In 1967, the U.S. Supreme Court first created a “good faith” defense against claims for damages (i.e., monetary compensation) under this law. In 1982, the Court broadened this defense, to create immunity from damages if the legal right at issue was not “clearly established” at the time the official violated it. Thus, even if a judge holds that a constitutional right exists, and finds that a government official violated this right, the official nonetheless is immune from paying damages—if that right was not “clearly established” at the time.

Qualified immunity directly harms people in two ways. First, many victims of constitutional violations are not compensated for their injuries. Second, many more people suffer constitutional violations, because the doctrine removes an incentive for government officials to follow the Constitution.

The consequences are shocking. For example, though a judge held that these abusive acts violated the Constitution, the perpetrators evaded responsibility through qualified immunity when:

  • Jail officials subjected a detainee to seven months of solitary confinement because he asked to visit the commissary.
  • A police officer pointed a gun at a man’s head, though he had already been searched, was calmly seated, and was being guarded by a second officer.

It gets worse. Judges had been required to engage in a two-step qualified immunity analysis. First, they determined whether the government official violated a constitutional right—that is, whether the right in fact exists. Second, they determined whether that right was clearly established at the time of the incident in question. But in 2009, the U.S. Supreme Court held that a federal judge may skip the first step, grant an official qualified immunity, and never rule on what the law is going forward.

As a result, many judges shirk their responsibility to interpret the Constitution and protect individual rights. This creates a vicious cycle, in which legal rights are not determined, allowing government officials to continue harming the public because the law is never “clearly established.” For example, judges declined to decide whether these abuses were unconstitutional:

  • A police officer attempted to shoot a nonthreatening pet dog while it was surrounded by children, and in doing so shot a child.
  • Police tear gassed a home, rendering it uninhabitable for several months, after a resident consented to police entry to arrest her ex-boyfriend.

In the words of one frustrated judge:

The inexorable result is “constitutional stagnation”—fewer courts establishing law at all, much less clearly doing so. Section 1983 meets Catch-22. Plaintiffs must produce precedent even as fewer courts are producing precedent. Important constitutional questions go unanswered precisely because no one’s answered them before. Courts then rely on that judicial silence to conclude there’s no equivalent case on the books. No precedent = no clearly established law = no liability. An Escherian Stairwell. Heads government wins, tails plaintiff loses.

Qualified Immunity Harms Digital Rights

Over and over, qualified immunity has undermined judicial protection of digital rights. This is not surprising. Many police departments and other government agencies use high-tech devices in ways that invade our privacy or censor our speech. Likewise, when members of the public use novel technologies in ways government officials dislike, they often retaliate. Precisely because these abuses concern cutting-edge tools, there might not be clearly established law. This invites qualified immunity defenses against claims of digital rights violations.

Consider the First Amendment right to use our cell phones to record on-duty police officers. Federal appellate courts in the First, Third, Fifth, Seventh, Ninth, and Eleventh Circuits have directly upheld this right. (EFF has advocated for this right in many amicus briefs.)

Yet last month, in a case called Frasier v. Evans, the Tenth Circuit held that this digital right was not clearly established. Frasier had used his tablet to record Denver police officers punching a suspect in the face as his head bounced off the pavement. Officers then retaliated against Frasier by detaining him, searching his tablet, and attempting to delete the video. The court granted the officers qualified immunity, rejecting Frasier’s claim that the officers violated the First Amendment.

Even worse, the Tenth Circuit refused to rule on whether, going forward, the First Amendment protects the right to record on-duty police officers. The court wrote: “we see no reason to risk the possibility of glibly announcing new constitutional rights … that will have no effect whatsoever on the case.” But a key function of judicial precedent is to protect the public from further governmental abuses. Thus, when the Third Circuit reached this issue in 2017, while it erroneously held that this right was not clearly established, it properly recognized this right going forward.

Qualified immunity has harmed other EFF advocacy for digital rights. To cite just two examples:

  • In Rehberg v. Paulk, we represented a whistleblower subjected to a bogus subpoena for his personal emails. The court erroneously held it was not clearly established that the Fourth Amendment protects email content, and declined to decide this question going forward.
  • In Hunt v. Regents, we filed an amicus brief arguing that a public university violated the First Amendment by disciplining a student for their political speech on social media. The court erroneously held that the student’s rights were not clearly established, and declined to decide the issue going forward.

The Movement to Repeal Qualified Immunity

A growing chorus of diverse stakeholders, ranging from the Cato Institute and the Institute for Justice to the ACLU, is demanding legislation to repeal this destructive legal doctrine. A recent “cross-ideological” amicus brief brought together the NAACP and the Alliance Defending Freedom. Activists against police violence also demand repeal.

This movement is buoyed by legal scholars who show the doctrine has no support in the 1871 law’s text and history. Likewise, judges required to follow the doctrine have forcefully condemned it.

Congress is beginning to heed the call. Last month, the U.S. House of Representatives passed the George Floyd Justice in Policing Act (H.R. 1280), which would repeal qualified immunity as to police. Even better, the Ending Qualified Immunity Act (S. 492) would repeal it as to all government officials. It was originally introduced by Rep. Ayanna Pressley (D-Mass.) and Rep. Justin Amash (L-Mich.).

States and cities are doing their part, too. Colorado, New Mexico, and New York City recently enacted laws to allow lawsuits against police misconduct, with no qualified immunity defense. A similar bill is pending in Illinois.

Next Steps

EFF supports legislation to repeal qualified immunity—a necessary measure to ensure that when government officials violate our digital rights, we can turn to the courts for justice. We urge you to do the same.

After Cookies, Ad Tech Wants to Use Your Email to Track You Everywhere

EFF - Mon, 04/12/2021 - 4:41pm

Cookies are dying, and the tracking industry is scrambling to replace them. Google has proposed Federated Learning of Cohorts (FLoC), TURTLEDOVE, and other bird-themed tech that would have browsers do some of the behavioral profiling that third-party trackers do today. But a coalition of independent surveillance advertisers has a different plan. Instead of stuffing more tracking tech into the browser (which they don’t control), they’d like to use more stable identifiers, like email addresses, to identify and track users across their devices.

There are several proposals from ad tech providers to preserve “addressable media” (read: individualized surveillance advertising) after cookies die off. We’ll focus on just one: Unified Identifier 2.0, or UID2 for short, developed by independent ad tech company The Trade Desk. UID2 is a successor to The Trade Desk’s cookie-based “unified ID.” Much like FLoC, UID2 is not a drop-in replacement for cookies, but aims to replace some of their functionality. It won’t replicate all of the privacy problems of third-party cookies, but it will create new ones. 

There are key differences between UID2 and Google’s proposals. FLoC will not allow third-party trackers to identify specific people on its own. There are still big problems with FLoC: it continues to enable auxiliary harms of targeted ads, like discrimination, and it bolsters other methods of tracking, like fingerprinting. But FLoC’s designers intend to move towards a world with less individualized third-party tracking. FLoC is a misguided effort with some laudable goals.

In contrast, UID2 is supposed to make it easier for trackers to identify people. It doubles down on the track-profile-target business model. If UID2 succeeds, faceless ad tech companies and data brokers will still track you around the web—and they’ll have an easier time tying your web browsing to your activity on other devices. UID2’s proponents want advertisers to have access to long-term behavioral profiles that capture nearly everything you do on any Internet-connected device, and they want to make it easier for trackers to share your data with each other. Despite its designers’ ill-founded claims about “privacy” and “transparency,” UID2 is a step backward for user privacy.

How Does UID2 Work?

In a nutshell, UID2 is a series of protocols for collecting, processing, and passing around users’ personally-identifying information (“PII”). Unlike cookies or FLoC, UID2 doesn’t aim to change how browsers work; rather, its designers want to standardize how advertisers share information. The UID2 authors have published a draft technical standard on Github. Information moves through the system like this:

  1. A publisher (like a website or app) asks a user for their personally-identifying information (PII), like an email address or a phone number. 
  2. The publisher shares that PII with a UID2 “operator” (an ad tech firm).
  3. The operator hashes the PII to generate a “Unified Identifier” (the UID2). This is the number that identifies the user in the system.
  4. A centralized administrator (perhaps The Trade Desk itself) distributes encryption keys to the operator, who encrypts the UID2 to generate a “token.” The operator sends this encrypted token back to the publisher.
  5. The publisher shares the token with advertisers.
  6. Advertisers who receive the token can freely share it throughout the advertising supply chain.
  7. Any ad tech firm who is a “compliant member” of the ecosystem can receive decryption keys from the administrator. These firms can decrypt the token into a raw identifier (a UID2). 
  8. The UID2 serves as the basis for a user profile, and allows trackers to link different pieces of data about a person together. Raw UID2s can be shared with data brokers and other actors within the system to facilitate the merging of user data.

The description of the system raises several questions. For example:

  • Who will act as an “administrator” in the system? Will there be one or many, and how will this impact competition on the Internet? 
  • Who will act as an “operator?” Outside of operators, who will the “members” of the system be? What responsibilities towards user data will these actors have?
  • Who will have access to raw UID2 identifiers? The draft specification implies that publishers will only see encrypted tokens, but most advertisers and data brokers will see raw, stable identifiers.

What we do know is that a new identifier, the UID2, will be generated from your email. This UID2 will be shared among advertisers and data brokers, and it will anchor their behavioral profiles about you. And your UID2 will be the same across all your devices.

How Does UID2 Compare With Cookies?

Cookies are associated with a single browser. This makes it easy for trackers to gather browsing history. But they still need to link cookie IDs to other information—often by working with a third-party data broker—in order to connect that browsing history to activity on phones, TVs, or in the real world. 

UID2s will be connected to people, not devices. That means an advertiser who collects UID2 from a website can link it to the UID2s it collects through apps, connected TVs, and connected vehicles belonging to the same person. That’s where the “unified” part of UID2 comes in: it’s supposed to make cross-device tracking as easy as cross-site tracking used to be.
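The practical effect is easy to picture. In the hypothetical sketch below (the record fields are invented for illustration), events reported from a website, a connected TV, and a mobile app collapse into a single profile the moment they share a UID2.

from collections import defaultdict

# Invented example events, each tagged with the same person-level identifier.
events = [
    {"uid2": "same-person-id", "source": "news site (web)", "activity": "read op-eds"},
    {"uid2": "same-person-id", "source": "connected TV", "activity": "streamed documentaries"},
    {"uid2": "same-person-id", "source": "fitness app", "activity": "logged morning runs"},
]

profiles = defaultdict(list)
for event in events:
    profiles[event["uid2"]].append((event["source"], event["activity"]))

# One identifier, one person, one merged behavioral profile spanning every device.
print(profiles["same-person-id"])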

UID2 is not a drop-in replacement for cookies. One of the most dangerous features of cookies is that they allow trackers to stalk users “anonymously.” A tracker can set a cookie in your browser the first time you open a new window; it can then use that cookie to start profiling your behavior before it knows who you are. This “anonymous” profile can then be used to target ads on its own (“we don’t know who this person is, but we know how they behave”) or it can be stored and joined with personally-identifying information later on.

In contrast, the UID2 system will not be able to function without some kind of input from the user. In some ways, this is good: it means if you refuse to share your personal information on the Web, you can’t be profiled with UID2. But this will also create new incentives for sites, apps, and connected devices to ask users for their email addresses. The UID2 documents indicate that this is part of the plan: 

Addressable advertising enables publishers and developers to provide the content and services consumers have come to enjoy, whether through mobile apps, streaming TV, or web experiences. … [UID2] empowers content creators to have the value exchange conversations with consumers while giving them more control and transparency over their data.

The standard authors take for granted that “addressable advertising” (and tracking and profiling) is necessary to keep publishers in business (it’s not). They also make it clear that under the UID2 framework, publishers are expected to demand PII in exchange for content.

How UID2 will work on websites, according to the documentation.

This creates bad new incentives for publishers. Some sites already require log-ins to view content. If UID2 takes off, expect many more ad-driven websites to ask for your email before letting you in. With UID2, advertisers are signaling that publishers will need to acquire, and share, users’ PII before they can serve the most lucrative ads. 

Where Does Google Fit In?

In March, Google announced that it “will not build alternate identifiers to track individuals as they browse across the web, nor... use them in [its] products.” Google has clarified that it won’t join the UID2 coalition, and won’t support similar efforts to enable third-party web tracking. This is good news—it presumably means that advertisers won’t be able to target users with UID2 in Google’s ad products, the most popular in the world. But UID2 could succeed despite Google’s opposition.

Unified ID 2.0 is designed to work without the browser’s help. It relies on users sharing personal information, like email addresses, with the sites they visit, and then uses that information as the basis for a cross-context identifier. Even if Chrome, Firefox, Safari, and other browsers want to rein in cross-site tracking, they will have a hard time preventing websites from asking for a user’s email address.

Google’s commitment to eschew third-party identifiers doesn’t mean said identifiers are going away. And it doesn’t justify creating new targeting tech like FLoC. Google may try to present these technologies as alternatives, and force us to choose: see, FLoC doesn’t look so bad when compared with Unified ID 2.0. But this is a false dichotomy. It’s more likely that, if Google chooses to deploy FLoC, it will complement—not replace—a new generation of identifiers like UID2.

UID2 focuses on identity, while FLoC and other “privacy sandbox” proposals from Google focus on revealing trends in your behavior. UID2 will help trackers capture detailed information about your activity on the apps and websites to which you reveal your identity. FLoC will summarize how you interact with the rest of the sites on the web. Deployed together, they could be a potent surveillance cocktail: specific, cross-context identifiers connected to comprehensive behavioral labels.

What Happens Next?

UID2 is not a revolutionary technology. It’s another step in the direction that the industry has been headed for some time. Using real-world identifiers has always been more convenient for trackers than using pseudonymous cookies. Ever since the introduction of the smartphone, advertisers have wanted to link your activity on the Web to what you do on your other devices. Over the years, a cottage industry has developed among data brokers, selling web-based tracking services that link cookie IDs to mobile ad identifiers and real-world info. 

The UID2 proposal is the culmination of that trend. UID2 is more of a policy change than a technical one: the ad industry is moving away from the anonymous profiling that cookies enabled, and is planning to demand email addresses and other PII instead. 

The demise of cookies is good. But if tracking tech based on real-world identity replaces them, it will be a step backward for users in important ways. First, it will make it harder for users in dangerous situations—for whom web activity could be held against them—to access content safely. Browsing the web anonymously may become more difficult or outright impossible. UID2 and its ilk will likely make it easier for law enforcement, intelligence agencies, militaries, and private actors to buy or demand sensitive data about real people.

Second, UID2 will incentivize ad-driven websites to erect “trackerwalls,” refusing entry to users who’d prefer not to share their personal information. Though its designers tout “consent” as a guiding principle, UID2 is more likely to force users to hand over sensitive data in exchange for content. For many, this will not be a choice at all. UID2 could normalize “pay-for-privacy,” widening the gap between those who are forced to give up their privacy for first-class access to the Internet, and those who can afford not to.

Deceptive Checkboxes Should Not Open Our Checkbooks

EFF - Fri, 04/09/2021 - 7:18pm

Last week, the New York Times highlighted the Trump 2020 campaign’s use of deceptive web design to trick supporters into donating far more money than they had intended. The campaign’s digital donation portal hid an unassuming but unfair mechanism for siphoning funds: a pre-checked box to “make a monthly recurring donation.” The result was weekly withdrawals from supporters’ bank accounts, some of which were drained entirely.

The checkbox in question, from the New York Times April 3rd piece.

A pre-checked box to donate more than you intended is just one example of a “dark pattern”—a term coined by user experience (UX) designer Harry Brignull to describe tricks used in websites and apps that make you do things you didn't mean to, such as buying a service. Unfortunately, dark patterns are widespread, and the pre-checked box is a particularly common way to subvert our right to consent to serious decisions, or to withhold that consent. This ruse dupes us into “agreeing” to be signed up for a mailing list, to having our data shared with third-party advertisers, or to making recurring donations. Some examples are below.

A screenshot of the November 3rd, 2020 donation form from WinRed on donaldjtrump.com, which shows two pre-checked boxes: one for monthly donations, and one for an additional automatic donation of the same amount on an additional date.

The National Republican Congressional Committee, which uses the same WinRed donation flow as the Trump campaign, displays two of these pre-checked boxes.

A screenshot of the National Republican Congressional Committee donation site (nrcc.org), from the WayBack Machine’s crawl on November 3rd, 2020.

The Democratic Congressional Campaign Committee’s donation site, built on ActBlue software, also shows a pre-selected option for monthly donations. The option is more prominently placed and the language is much clearer about what users should expect from a monthly contribution. Even so, users who intend to donate only once still have to notice the option and deselect it.

A screenshot from August 31, 2020 of a pre-selected option for monthly contributions on the Democratic Congressional Campaign Committee (dccc.org).

What’s Wrong with a Dark Pattern Using Pre-Selected Recurring Options? 

Pre-selected options, such as pre-checked boxes, are common and not limited to the political realm. Organizations understandably seek financial stability by asking their donors for regular, recurring contributions. However, pre-selecting a recurring contribution can deprive donors of choice and undermine their trust. At best, this stratagem manipulates a user’s emotions by suggesting they are supposed to give more than once. More maliciously, it preys on the likelihood that a user skimming the page won’t notice the pre-selected option at all. By contrast, requiring a user to click an option before consenting to a recurring contribution puts the user in an active position of decision-making. Defaults matter: it makes a real difference whether monthly giving defaults to “yes, count me in” or to “no, donate once.”

So, does a pre-selected option indicate consent? A variety of laws across the globe have aimed to minimize the use of pre-selected checkboxes, but at present, most U.S. users are protected by no such law. Unfortunately, some U.S. courts have even ruled that pre-selected boxes (or “opt-out” models) constitute express consent. By contrast, Canadian spam law requires a separate box, not pre-checked, for email opt-ins. Likewise, under the European Union’s GDPR, pre-selected checkboxes cannot be used to obtain consent for cookies on web pages. But for now, many of the world’s users remain at the whims of deceptive product teams when it comes to pre-selected checkboxes like these.

Are there instances in which it’s okay to use a pre-selected option as a design element? For options that don’t carry much weight beyond what the user expects (that is, options consistent with their expectations of the interaction), a pre-selected option may be appropriate. One example might be if a user clicks a link with language like “become a monthly donor,” and ends up on a page with a pre-selected monthly contribution option. It also might be appropriate to use a pre-selected option to send a confirmation email of the donation. This is very different from, say, quietly adding items to a user’s cart before processing a donation, leaving an unexpected charge on their credit card bill later.

How Do We Better Protect Users and Financial Contributors?

Dark patterns are ubiquitous in websites and apps, and aren’t limited to financial contributions or email signups. We must build a landscape where users’ choices are genuine.

UX designers, web developers, and product teams must ensure genuine user consent when designing interfaces. A few practices for avoiding dark patterns include:

  • Present opt-in, rather than opt-out, flows for significant decisions, such as whether to share data or to donate monthly (e.g. no pre-selected options for recurring contributions; see the sketch after this list).
  • Avoid manipulative language. Options should tell the user what the interaction will do, without editorializing (e.g. avoid “if you UNCHECK this box, we will have to tell __ you are a DEFECTOR”).
  • Provide explicit notice for how user data will be used.
  • Strive to meet web accessibility practices, such as aiming for plain, readable language (for example, avoiding the use of double-negatives).
  • Only use a pre-selected option for a choice that doesn’t obligate users to do more than they are comfortable with. For example, EFF doesn’t assume all of our donors want to become EFF members: users are given the option to uncheck the “Make me a member” box. Offering this choice allows us to add a donor to our ranks as a member, but doesn’t obligate them to anything. 
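To illustrate the first point in the list above, here is a minimal, hypothetical sketch of the difference between an opt-in default and a pre-checked (opt-out) default for a recurring donation; the field name and wording are invented for the example.

```typescript
// Minimal sketch: the only difference between respectful design and a dark
// pattern here is the default state of one checkbox.
function recurringDonationCheckbox(optIn: boolean): HTMLLabelElement {
  const label = document.createElement("label");
  const box = document.createElement("input");
  box.type = "checkbox";
  box.name = "monthly_donation";
  // Opt-in: unchecked until the user actively chooses monthly giving.
  // Opt-out (dark pattern): pre-checked, quietly committing skimming users.
  box.checked = !optIn;
  label.append(box, " Yes, make this a monthly donation");
  return label;
}

// Respecting consent means the significant choice defaults to "off":
document.body.append(recurringDonationCheckbox(true));
```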

We also need policy reform. As we’ve written, we support user-empowering laws to protect against deceptive practices by companies. For example, EFF supported regulations to protect users against dark patterns, issued under the California Consumer Privacy Act. 

EFF Challenges Surreptitious Collection of DNA at Iowa Supreme Court

EFF - Fri, 04/09/2021 - 3:35pm

Last week, EFF, along with the ACLU and the ACLU of Iowa, filed an amicus brief in the Iowa Supreme Court challenging the surreptitious collection of DNA without a warrant. We argued this practice violates the Fourth Amendment and Article I, Section 8 of the Iowa state constitution. This is the first case to reach a state supreme court involving such a challenge after results of a genetic genealogy database search linked the defendant to a crime.

The case, State v. Burns, involves charges from a murder that occurred in 1979. The police had no leads in the case for years, even after modern technology allowed them to extract DNA from blood left at the crime scene and test it against DNA collected in government-run arrestee and offender DNA databases like CODIS.

In 2018, the police began working with a company called Parabon Nanolabs, which used the forensic DNA profile to predict the physical appearance of the alleged perpetrator and to generate an image that the Cedar Rapids Police Department released to the public. That image did not produce any new leads, so the police worked with Parabon to upload the DNA profile to a consumer genetic genealogy database called GEDMatch, which we’ve written about in the past. Through GEDMatch, the police linked the crime scene DNA to three brothers, including the defendant in this case, Jerry Burns. Police then surveilled Mr. Burns until they could collect something containing his DNA. The police found a straw he used and left behind at a restaurant, extracted a profile from DNA left on the straw, matched it to DNA found at the crime scene, and arrested Mr. Burns.

The State claims that the Fourth Amendment doesn’t apply in this context because Mr. Burns abandoned his privacy interest in his DNA when he left it behind on the straw. However, we argue the Fourth Amendment creates a high bar against collecting DNA from free people, even if it’s found on items the person has voluntarily discarded. In 1988, the Supreme Court ruled that the Fourth Amendment does not protect the contents of people’s trash left for pickup because they have “abandoned” an expectation of privacy in the trash. But unlike a gum wrapper, a cigarette butt, or the straw in this case, our DNA contains so much private information that the data in a DNA sample can never be “abandoned.” Even if police don’t need a warrant to rummage through your trash (and many states disagree on this point), police should need a warrant to rummage through your DNA.

A DNA sample—whether taken directly from a person or extracted from items that person leaves behind—contains a person’s entire genetic makeup. It can reveal intensely sensitive information about us, including our propensities for certain medical conditions, our ancestry, and our biological familial relationships. Some researchers have also claimed that human behaviors such as aggression and addiction can be explained, at least in part, by genetics. And private companies have claimed they can use our DNA for everything from identifying our eye, hair, and skin colors and the shapes of our faces; to determining whether we are lactose intolerant, prefer sweet or salty foods, and can sleep deeply; to discovering the likely migration patterns of our ancestors and the identities of family members we never even knew we had.

Despite the uniquely revealing nature of DNA, we cannot avoid leaving behind the whole of our genetic code wherever we go. Humans are constantly shedding genetic material; in less time than it takes to order a coffee, most of us lose nearly enough skin cells to cover an entire football field. The only way to avoid depositing our DNA on nearly every item we touch would be to never leave home. For these reasons, as we argue in our brief, we can never abandon a privacy interest in our DNA.

The Burns case also raises thorny Fourth Amendment issues related to law enforcement use of consumer genetic genealogy databases. We’ve written about these issues before, and, unfortunately, the process of searching genetic genealogy databases in criminal investigations has become quite common. By some estimates, genetic genealogy sites were used in around 200 cases in 2018 alone. This is because more than 26 million people have uploaded their genetic data to sites like GEDmatch to try to identify biological relatives, build a family tree, and learn about their health. These sites are available to anyone and are relatively easy to use. And many sites, including GEDmatch, lack any technical restrictions that would keep the police out. As a result, law enforcement officers have been capitalizing on all this freely available data in criminal investigations across the country. And in none of the cases we’ve reviewed, including Burns, have officers sought a warrant or any legal process at all before searching these private databases.

Police access to this data creates immeasurable threats to our privacy. It also puts us at much greater risk of being accused of crimes we didn’t commit. For example, in 2015, a similar forensic genetic genealogy search led police to suspect an innocent man. Even without genetic genealogy searches, DNA matches may lead officers to suspect—and jail—the wrong person, as happened in a California case in 2012. That can happen because our DNA may be transferred from one location to another, possibly ending up at the scene of a crime, even if we were never there.

Even if you yourself never upload your genetic data to a genetic genealogy website, your privacy could be impacted by a distant family member’s choice to do so. Although GEDmatch’s 1.3 million users only encompass about 0.5% of the U.S. adult population, research shows that their data alone could be used to identify 60% of white Americans. And once GEDmatch’s users encompass just 2% of the U.S. population, 90% of white Americans will be identifiable. Other research has shown that adversaries may be able to compromise these databases to put many users at risk of having their genotypes revealed, either at key positions or at many sites genome-wide. 

This is why this case and others like it are so important—and why we need strong rules against police access to genetic genealogy databases. Our DNA can reveal so much about us that our genetic privacy must be protected at all costs. 

We hope the Iowa Supreme Court and other courts addressing this issue will recognize that the Fourth Amendment protects us from surreptitious collection and searches of our DNA.

Related Cases: People v. Buza

Am I FLoCed? A New Site to Test Google's Invasive Experiment

EFF - Fri, 04/09/2021 - 3:22pm

Today we’re launching Am I FLoCed, a new site that will tell you whether your Chrome browser has been turned into a guinea pig for Federated Learning of Cohorts, or FLoC, Google’s latest targeted advertising experiment. If you are a test subject, we will tell you how your browser is describing you to every website you visit. Am I FLoCed is part of an effort to bring to light the invasive practices of the adtech industry—Google included—in the hope that we can create a better internet for all, where our privacy rights are respected no matter how profitable violating them may be for tech companies.

FLoC is a terrible idea that should not be implemented, and Google’s experiment with it is deeply flawed. We hope this site raises awareness about where the future of Chrome seems to be heading, and why it shouldn’t go there.

FLoC takes most of your browsing history in Chrome and analyzes it to assign you to a category, or “cohort.” This cohort ID is then sent to any website you visit that requests it, in essence telling them what kind of person Google thinks you are. For the time being, the ID changes every week, leaking fresh information about you as your browsing habits change. You can read a more detailed explanation here.
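For websites, reading this label takes a single call. The sketch below is a hypothetical example that assumes the document.interestCohort() method Chrome exposed during its origin trial; the API is not a web standard and may change or disappear.

```typescript
// Hypothetical sketch: how a site could read a visitor's FLoC label, assuming
// the document.interestCohort() API from Chrome's origin trial.
async function readFlocCohort(): Promise<void> {
  if (!("interestCohort" in document)) {
    console.log("FLoC is not enabled in this browser.");
    return;
  }
  try {
    // Example shape from the trial: { id: "14159", version: "chrome.2.1" }.
    const cohort = await (document as any).interestCohort();
    console.log("This browser's cohort:", cohort.id, cohort.version);
  } catch {
    console.log("Cohort unavailable (opted out or not enough history).");
  }
}
readFlocCohort();
```

The same label is available to every site that asks, which makes it a broadcast of your weekly behavioral summary rather than a site-specific signal.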

Because this ID changes, you will want to visit https://amifloced.org often to see those changes.

Why is this happening?

Users have been demanding more and more respect for their online privacy from big business, realizing that the claim “privacy is dead” was never anything but a marketing campaign. The biggest players that stand to profit from privacy invasion are in the behavioral targeting industry.

Some companies and organizations have listened to users’ requests and improved some of their practices, giving their users stronger security and privacy assurances. But most have not. This entire industry sells its intricate knowledge about people in order to target them with advertising; the most notable players are Google and Facebook, but the industry also includes many data brokers with names you’ve probably never heard before.

The most common way these companies identify you is by using “cookies” to track every movement you make on the internet. This relies on a tracking company convincing as many sites as possible to install their tracking cookie. But with tracking protections being deployed via browser extensions like Privacy Badger, or in browsers like Firefox and Safari, this has become more difficult. Moreover, stronger privacy laws are coming. Thus, many in the adtech industry have realized that the end is near for third-party tracking cookies.

While some cling to the old ways, others are trying to find new ways to keep tracking users and monetizing their personal information without third-party cookies. These companies will use the word “privacy” in their marketing and try to convince users, policymakers, and regulators that their solutions are better for users and the market. Or they will claim the other solutions are worse, creating the false impression that users have to choose between “bad” and “worse.”

Our digital future should not be one where an industry keeps profiting from privacy violations; it should be one where our rights are respected.

The Google Proposal

Google announced the launch of its FLoC test in a recent blog post. The post performs a lot of mental gymnastics to twist this terrible idea into the semblance of a privacy-friendly endeavor.

Perhaps most disturbing is the claim that FLoC’s cohorts are not based on who you are as an individual. The reality is that FLoC uses your detailed, unique browsing history to assign you to a cohort. Cohort sizes are tuned to remain useful to advertisers, and according to Google’s own research, FLoC-based targeting is at least 95% as effective as cookie-based targeting, which suggests that, in privacy terms, cohorts are only a marginal improvement over cookies.
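How does a browser boil a whole browsing history down to one short label? Chrome's trial implementation reportedly used a SimHash-style locality-sensitive hash over the sites you visit, so that similar histories map to similar labels. The snippet below is a simplified, hypothetical illustration of that idea, not Google's actual algorithm; the hash function and bit width are invented for the example.

```typescript
// Simplified, hypothetical SimHash-style cohort assignment: browsers with
// similar browsing histories tend to agree on most bits, and therefore end
// up with the same or nearby cohort labels.
function domainBit(domain: string, bit: number): number {
  // Cheap deterministic hash of (domain, bit position); not cryptographic.
  let h = (bit + 1) * 2654435761;
  for (const ch of domain) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h & 1;
}

function simhashCohort(visitedDomains: string[], bits = 8): string {
  const votes = new Array(bits).fill(0);
  for (const domain of visitedDomains) {
    for (let i = 0; i < bits; i++) {
      votes[i] += domainBit(domain, i) === 1 ? 1 : -1;
    }
  }
  // Each output bit is the majority vote across the whole history.
  return votes.map((v) => (v > 0 ? "1" : "0")).join("");
}

console.log(simhashCohort(["news.example", "shoes.example", "forum.example"]));
```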

FLoC might not share your detailed browsing history. But we reject the notion that “because it’s computed on your device, it’s private.” If data is used to infer something about you, about who you are and how you can be targeted, and that inference is then shared with other sites and advertisers, it’s not private at all.

And let's not forget that Google Sync, when enabled, already shares your detailed Chrome browsing history with Google by default.

The sole intent of FLoC is to preserve the status quo of surveillance capitalism while offering a vague appearance of user choice. It further cements the internet’s dependence on “Google’s benevolence”: the misguided belief that Google is our friendly corporate overlord, that it knows better, and that we should sign away our rights in exchange for the crumbs that keep the internet alive.

Google has also made unsubstantiated statements like “FLoC allows you to remain anonymous as you browse across websites and also improves privacy by allowing publishers to present relevant ads to large groups (called cohorts),” but as far as we can tell, FLoC does not make you anonymous in any way. Only a few browsers, like Tor Browser, can credibly make such a claim. With FLoC, your browser is still telling sites something about your behavior, and Google cannot equate grouping users into advertising cohorts with “anonymity.”

This experiment is irresponsible and antagonistic to users. FLoC, with marginal improvements on privacy, is riddled with issues, and yet is planned to be rolled out to millions of users around the world with no proper notification, opt-in consent, or meaningful individual opt-out at launch.

This is not just one more Chrome experiment. This is a fundamental change to the browser and how people are exploited for their data. After all the pushback, concerns, and issues, the fact that Google has chosen to ignore the warnings is telling of where the company stands with regard to our privacy.

Try it!

What Movie Studios Refuse to Understand About Streaming

EFF - Wed, 04/07/2021 - 3:57pm

The longer we live in the new digital world, the more we are seeing it replicate systemic issues we’ve been fighting for decades. In the case of movie studios, what we’ve seen in the last few years in streaming mirrors what happened in the 1930s and ‘40s, when a small group of movie studios also controlled the theaters that showed their films. And by 1948, the actions of the studios were deemed violations of antitrust law, resulting in a consent decree. The Justice Department ended that decree in 2019 under the theory that the remaining studios could not “reinstate their cartel.” Maybe not in physical movie theaters. But online is another story.

Back in the ‘30s and ‘40s, the problem was that the major film studios—including Warner Bros. and Universal which exist to this day—owned everything related to the movies they made. They had everyone involved on staff under exclusive and restrictive contracts. They owned the intellectual property. They even owned the places that processed the physical film. And, of course, they owned the movie theaters.

In 1948, the studios were forced to sell off their stakes in movie theaters and chains, having lost in the Supreme Court.

The benefits for audiences were pretty clear. Under the old system, theaters scheduled showings so they wouldn’t overlap with one another, which meant you couldn’t see a movie at the theater and time most convenient for you. Studios also forced theaters to buy their entire slates of movies without seeing them (called “blind buying”), instead of picking, say, the films of highest quality or interest—the ones that would bring in audiences. And, of course, the larger chains and the theaters owned by the studios got preferential treatment.

There is a reason the courts stopped this practice. For audiences, separating theaters from studios meant that their local theaters now had a variety of films, were more likely to have the ones they wanted to see, and would be showing them at the most convenient times. So they didn’t have to search listings for some arcane combination of time, location, and movie.

And now it is 2021. If you consume digital media, you may have noticed something… familiar.

The first wave of streaming services—Hulu, Netflix, iTunes, etc.—had a diversity of content from a number of different studios. And for the back catalog, the things that had already aired, services had all of the episodes available at once. Binge-watching was ascendant.

The value of these services to the audience was, like your local theater, convenience. You paid a set price and could pick from a diverse catalog to watch what you wanted, when you wanted, from the comfort of your home. Just as they did almost 100 years ago, studios realized the business opportunity in owning every step of the process of making entertainment. It’s just that those steps look different today than they did back then.

Instead of owning the film processing labs, the studios are now under the same corporate roofs as the infrastructure itself: the internet service providers (ISPs). AT&T owns Warner Bros. and HBO. Comcast owns Universal and NBC. And so on.

Instead of having creative people on restrictive contracts they… well, that they still do. Netflix regularly makes headlines for signing big names to exclusive deals. And studios buy up other studios and properties to lock down the rights to popular media. Disney in particular has bought up Star Wars and Marvel in a bid to put as much “intellectual property” under its exclusive control as possible, owning not just movie rights but every revenue stream a story can generate. As the saying goes, no one does to Disney what Disney did to the Brothers Grimm.

Instead of owning theater chains, studios have all launched their own streaming services. And as with theaters, a lot of the convenience has been stripped. Studios saw streaming making money and did not want to let others reap the rewards, so they’ve been pulling their works and putting them under the exclusive umbrella of their own streaming services.

Rather than offering a huge catalog of diverse studio material, which is what made Netflix popular to begin with, these services have replaced convenience with exclusivity. Of course, much as in the old days, the problem is that people don’t want everything a single studio offers. They want certain things. But a subscription fee isn’t just for what you want; it’s for everything. Much like the theater chains of old, we are now blind buying the entire slate of whatever Disney, HBO, Amazon, and the rest are offering.

And, of course, they can’t take the chance that we’ll pay the monthly fee once, watch what we’re looking for, and cancel. So a lot of these exclusives are no longer released in binge-watching mode, but staggered to keep us paying every month for new installments. Which is how the new world of streaming is becoming a hybrid of the old world of cable TV and movie theaters.

Watching something online legally these days means a frustrating search across many, many services, hoping that the thing you want is on one you already pay for and not on yet another new one. Sometimes you’re looking for something that was on a service you paid for but has since been locked into another one. Instead of building services that provide the convenience audiences want—with competition driving services to make better and better products for audiences—the value now lies in making something rare: something that can only be found on your service. And even if it is not good, it is at least tied to a popular franchise or some other thing people do not want to be left out of.

Instead of building better services—faster internet access, better interfaces, better content—the model is all based on exclusive control. Many Americans don’t have a choice in their broadband provider, a monopoly ISPs jealously guard rather than building a service so good we’d pick it on purpose. Instead of choosing the streaming service with the best price or library or interface, we have to pay all of them. Our old favorites are locked down, so we can’t access everything in one place anymore. New things set in our favorite worlds are likewise locked down to certain services, and sometimes even to certain devices. And creators we like? Also locked into exclusive contracts at certain services.

And the thing is, we know from history that this isn’t what consumers want. We know from the ‘30s and ‘40s that this kind of vertical integration is not good for creativity or for audiences. We know from the recent past that convenient, reasonably priced, and legal internet services are what users want and will use. So we very much know that this system is untenable and anticompetitive, that it can encourage copyright infringement, and that it drives the growth of reactionary, draconian copyright laws that hurt innovators and independent creators. We also know what works.

Antitrust enforcers back in the ‘30s and ‘40s recognized that a system like this should not exist and put a stop to it. Breaking the studios’ cartel in the ‘40s led to more independent movie theaters, more independent studios, and more creativity in movies in general. So why have we let this system regrow itself online?

Organizations Call on President Biden to Rescind President Trump’s Executive Order that Punished Online Social Media for Fact-Checking

EFF - Wed, 04/07/2021 - 2:01pm

President Joe Biden should rescind a dangerous and unconstitutional Executive Order issued by President Trump that continues to threaten internet users’ ability to obtain accurate and truthful information online, six organizations wrote in a letter sent to the president on Wednesday.

The organizations, Rock The Vote, Voto Latino, Common Cause, Free Press, Decoding Democracy, and the Center for Democracy & Technology, pressed Biden to remove his predecessor’s “Executive Order on Preventing Online Censorship” because “it is a drastic assault on free speech designed to punish online platforms that fact-checked President Trump.”

The organizations filed lawsuits to strike down the Executive Order last year, with Rock The Vote, Voto Latino, Common Cause, Free Press, and Decoding Democracy’s challenge currently on appeal in the U.S. Court of Appeals for the Ninth Circuit. The Center for Democracy & Technology’s appeal is pending in the U.S. Court of Appeals for the D.C. Circuit. (Cooley LLP, Protect Democracy, and EFF represent the plaintiffs in Rock The Vote v. Trump.)

As the letter explains, Trump issued the unconstitutional Executive Order in retaliation for Twitter fact-checking his May 2020 tweets spreading false information about mail-in voting. The Executive Order, issued two days later, sought to undermine a key law protecting internet users’ speech, 47 U.S.C. § 230 (“Section 230”), and to punish online platforms, including by directing federal agencies to review and potentially stop advertising on social media and by kickstarting a federal rulemaking to reinterpret Section 230. From the letter:

His actions made clear that the full force of the federal government would be brought down on those whose speech he did not agree with, in an effort to coerce adoption of his own views and to prevent the dissemination of accurate information about voting.

As the letter notes, despite President Biden eliminating other Executive Orders issued by Trump, the order targeting online services remains active. Biden’s inaction is troubling because the Executive Order “threatens democracy and the voting rights of people who have been underrepresented for generations,” the letter states.

“Thus, your Administration is in the untenable position of defending an unconstitutional order that was issued with the clear purpose of chilling accurate information about the 2020 election and undermining public trust in the results,” the letter continues.

The letter concludes:

Eliminating this egregiously unconstitutional hold-over from the prior Administration vindicates the Constitution’s protections for both online services and the users who rely on them for accurate, truthful information about voting rights.

Related Cases: Rock the Vote v. Trump

India’s Strict Rules For Online Intermediaries Undermine Freedom of Expression

EFF - Wed, 04/07/2021 - 11:06am

India has introduced draconian changes to its rules for online intermediaries, tightening government control over the information ecosystem and over what can be said online. The new rules seek to prevent social media companies and other content hosts from setting their own moderation policies, including policies framed to comply with international human rights obligations. The new “Intermediary Guidelines and Digital Media Ethics Code” (2021 Rules) have already been used in an attempt to censor speech about the government: within days of publication, a state governed by the ruling Bharatiya Janata Party invoked them to issue a legal notice to an online news platform that has been critical of the government. The notice was withdrawn almost immediately after public outcry, but it served as a warning of how the rules can be used.

The 2021 Rules, ostensibly created to combat misinformation and illegal content, substantially revise India’s intermediary liability scheme. They were notified as rules under the Information Technology Act 2000, replacing the 2011 Intermediary Rules.

New Categories of Intermediaries

The 2021 Rules create two new subsets of intermediaries: “social media intermediaries” and “significant social media intermediaries,” the latter of which are subject to more onerous regulations. The due diligence requirements for these companies include having proactive speech monitoring, compliance personnel who reside in India, and the ability to trace and identify the originator of a post or message.

“Social media intermediaries” are defined broadly, as entities which primarily or solely “enable online interaction between two or more users and allow them to create, upload, share, disseminate, modify or access information using its services.” Obvious examples include Facebook, Twitter, and YouTube, but the definition could also include search engines and cloud service providers, which are not social media in a strict sense.

“Significant social media intermediaries” are those with registered users in India above a 5 million threshold. But the 2021 Rules also allow the government to deem any “intermediary” - including telecom and internet service providers, web-hosting services, and payment gateways - a ‘significant’ social media intermediary if it creates a “material risk of harm” to the sovereignty, integrity, and security of the state, friendly relations with Foreign States, or public order. For example, a private messaging app can be deemed “significant” if the government decides that the app allows the “transmission of information” in a way that could create a “material risk of harm.” The power to deem ordinary intermediaries as significant also encompasses ‘parts’ of services, which are “in the nature of an intermediary” - like Microsoft Teams and other messaging applications.

New ‘Due Diligence’ Obligations

The 2021 Rules, like their predecessor 2011 Rules, enact a conditional immunity standard. They lay out an expanded list of due diligence obligations that intermediaries must comply with in order to avoid being held liable for content hosted on their platforms.

Intermediaries are required to incorporate content rules—designed by the Indian government itself—into their policies, terms of service, and user agreements. The 2011 Rules contained eight categories of speech that intermediaries must notify their users not to “host, display, upload, modify, publish, transmit, store, update or share.” These include content that violates Indian law, but also many vague categories that could lead to censorship of legitimate user speech. By complying with government-imposed restrictions, companies cannot live up to their responsibility to respect international human rights, in particular freedom of expression, in their daily business conduct. 

Strict Turnaround for Content Removal

The 2021 Rules require all intermediaries to remove restricted content within 36 hours of obtaining actual knowledge of its existence, taken to mean a court order or notification from a government agency. The law gives non-judicial government bodies great authority to compel intermediaries to take down restricted content. Platforms that disagree with or challenge government orders face penal consequences under the Information Technology Act and criminal law if they fail to comply.

The Rules impose strict turnaround timelines for responding to government orders and requests for data. Intermediaries must provide information within their control or possession, or ‘assistance,’ within 72 hours to government agencies for a broad range of purposes: verification of identity, or the prevention, detection, investigation, or prosecution of offenses or for cybersecurity incidents. In addition, intermediaries are required to remove or disable, within 24 hours of receiving a complaint, non-consensual sexually explicit material or material in the “nature of impersonation in an electronic form, including artificially morphed images of such individuals.” The deadlines do not provide sufficient time to assess complaints or government orders. To meet them, platforms will be compelled to use automated filter systems to identify and remove content. These error-prone systems can filter out legitimate speech and are a threat to users' rights to free speech and expression.

Failure to comply with these rules could lead to severe penalties, such as a jail term of up to seven years. In the past, the Indian government has threatened company executives with prosecution - as, for instance, when they served a legal notice on Twitter, asking the company to explain why recent territorial changes in the state of Kashmir were not reflected accurately on the platform’s services. The notice threatened to block Twitter or imprison its executives if a “satisfactory” explanation was not furnished. Similarly, the government threatened Twitter executives with imprisonment when they reinstated content about farmer protests that the government had ordered them to take down.

Additional Obligations for Significant Social Media Intermediaries

On a positive note, the Rules require significant social media intermediaries to have transparency and due process rules in place for content takedowns. Companies must notify users when their content is removed, explain why it was taken down, and provide an appeals process.

On the other hand, the 2021 Rules compel providers to appoint an Indian resident “Chief Compliance Officer,” who will be held personally liable in any proceedings relating to non-compliance with the rules, and a “Resident Grievance Officer” responsible for responding to users’ complaints and government and court orders. Companies must also appoint a resident employee to serve as a contact person for coordination with law enforcement agencies. With more executives residing in India, where they could face prosecution, intermediaries may find it difficult to challenge or resist arbitrary and disproportionate government orders.

Proactive Monitoring

Significant social media intermediaries are called on to “endeavour to deploy technology-based measures,” including automated tools or other mechanisms, to proactively identify certain types of content. This includes information depicting rape or child sexual abuse and content that has previously been removed for violating rules. The stringent provisions of the 2021 Rules already encourage over-removal of content; requiring intermediaries to deploy automated filters will likely result in even more takedowns.

Encryption and Traceability Requirements

The Indian government has been wrangling with messaging app companies—most famously WhatsApp—for several years now, demanding “traceability” of the originators of forwarded messages. The demand first emerged in the context of a series of mob lynchings in India, triggered by rumors that went viral on WhatsApp. Subsequently, petitions were filed in Indian courts seeking to link social networking accounts with users’ biometric identity (Aadhaar) numbers. Although the court ruled against the proposal, expert opinions supplied by a member of the Prime Minister’s scientific advisory committee suggested technical measures to enable traceability on end-to-end encrypted platforms.

Because of their privacy and security features, some messaging systems don’t learn or record the history of who first created particular content that was then forwarded by others, a state of affairs that the Indian government and others have found objectionable. The 2021 Rules represent a further escalation of this conflict, requiring private messaging intermediaries to “enable the identification of the first originator of the information” upon a court order or a decryption request issued under the 2009 Decryption Rules. (The Decryption Rules allow authorities to request the interception, monitoring, or decryption of any information generated, transmitted, received, or stored in any computer resource.) If the first originator of a message is located outside the territory of India, the private messaging app will be compelled to identify the first originator of that information within India.

The 2021 Rules place various limitations on these court orders; namely, they can only be issued for serious crimes. However, such limitations do not solve the core problem with this proposal: a technical mandate that companies re-engineer or redesign their messaging services to comply with the government's demand to identify the originator of a message.

Conclusion

The 2021 Rules were fast-tracked without public consultation or a pre-legislative consultation, where the government seeks recommendations from stakeholders in a transparent process. They will have profound implications for the privacy and freedom of expression of Indian users. They restrict companies’ discretion in moderating their own platforms and create new possibilities for government surveillance of citizens. These rules threaten the idea of a free and open internet built on a bedrock of international human rights standards.

The EU Online Terrorism Regulation: a Bad Deal

EFF - Wed, 04/07/2021 - 3:00am

On 12 September 2018, the European Commission presented a proposal for a regulation on preventing the dissemination of terrorist content online—dubbed the Terrorism Regulation, or TERREG for short—that contained some alarming ideas. In particular, the proposal included an obligation for platforms to remove potentially terrorist content within one hour, following an order from national competent authorities.

Ideas such as this one have been around for some time already. In 2016, we first wrote about the European Commission’s attempt to create a voluntary agreement for companies to remove certain content (including terrorist expression) within 24 hours, and Germany’s Network Enforcement Act (NetzDG) requires the same. NetzDG has spawned dozens of copycats throughout the world, including in countries like Turkey with far fewer protections for speech, and human rights more generally.

Beyond the one-hour removal requirement, the TERREG also contained a broad definition of terrorist content: “material that incites or advocates committing terrorist offences, promotes the activities of a terrorist group or provides instructions and techniques for committing terrorist offences”.

Furthermore, it introduced a duty of care for all platforms to avoid being misused for the dissemination of terrorist content. This includes the requirement of taking proactive measures to prevent the dissemination of such content. These rules were accompanied by a framework of cooperation and enforcement. 

These aspects of the TERREG are particularly concerning, as research we’ve conducted in collaboration with other groups demonstrates that companies routinely make content moderation errors that remove speech that parodies or pushes back against terrorism, or documents human rights violations in countries like Syria that are experiencing war.

TERREG and human rights

TERREG was created without real consultation with free expression and human rights groups, and it has serious repercussions for online expression. Even worse, the proposal was adopted based on political spin rather than evidence.

Notably, in 2019, the EU Fundamental Rights Agency (FRA)—which the EU Parliament had asked for an opinion—expressed concern about the regulation. In particular, the FRA noted that the definition of terrorist content had to be narrowed, as it was too broad and would interfere with freedom of expression. Further, “According to the FRA, the proposal does not guarantee the involvement by the judiciary and the Member States' obligation to protect fundamental rights online has to be strengthened.”

Together with many other civil society groups, we voiced our deep concern over the proposed legislation and stressed that the new rules would pose serious threats to the fundamental rights to privacy and freedom of expression.

The message to EU policymakers was clear:

  • Abolish the one-hour time frame for content removal, which is too tight for platforms and will lead to over-removal of content;
  • Respect the principles of territoriality and ensure access to justice in cases of cross-border takedowns by ensuring that only the Member State in which the hosting service provider has its legal establishment can issue removal orders;
  • Ensure due process and clarify that the legality of content be determined by a court or independent administrative authority;
  • Don’t impose the use of upload or re-upload filters (automated content recognition technologies) to services under the scope of the Regulation;
  • Exempt certain protected forms of expression, such as educational, artistic, journalistic, and research materials.

However, while the responsible committees of the EU Parliament showed willingness to take the concerns of civil society groups into account, things looked grimmer in the Council, where government ministers from each EU country meet to discuss and adopt laws. During the closed-door negotiations between the EU institutions to strike a deal, different versions of TERREG were discussed, which prompted further letters from civil society groups urging lawmakers to ensure key safeguards for freedom of expression and the rule of law.

Fortunately, civil society groups and fundamental rights-friendly MEPs in the Parliament were able to achieve some of their goals. For example, the agreement reached by the EU institutions includes exceptions for journalistic, artistic, and educational purposes. Another major improvement concerns the definition of terrorist content (now matching the narrower definition in the EU Directive on combating terrorism) and the option for hosting providers to invoke technical and operational reasons for not complying with the strict one-hour removal obligation. Most importantly, the deal states that authorities cannot impose upload filters on platforms.

The Deal Is Still Not Good Enough

While civil society intervention has resulted in a series of significant improvements to the law, there is more work to be done. The proposed regulation still gives broad powers to national authorities, without judicial oversight, to censor online content that they deem to be “terrorism” anywhere in the EU, within a one-hour timeframe, and to incentivize companies to delete more content of their own volition. It further encourages the use of automated tools, without any guarantee of human oversight.

Now, a broad coalition of civil society organizations is voicing their concerns with the Parliament, which must agree to the deal for it to become law. EFF and others suggest that the Members of the European Parliament should vote against the adoption of the proposal. We encourage our followers to raise awareness about the implications of TERREG and reach out to their national members of the EU Parliament.

Victory for Fair Use: The Supreme Court Reverses the Federal Circuit in Oracle v. Google

EFF - Mon, 04/05/2021 - 8:34pm

In a win for innovation, the U.S. Supreme Court has held that Google’s use of certain Java Application Programming Interfaces (APIs) is a lawful fair use. In doing so, the Court reversed the previous rulings by the Federal Circuit and recognized that copyright only promotes innovation and creativity when it provides breathing room for those who are building on what has come before.

This decision gives more legal certainty to software developers’ common practice of using, re-using, and re-implementing software interfaces written by others, a custom that underlies most of the internet and personal computing technologies we use every day.

To briefly summarize over ten years of litigation: Oracle claims a copyright on the Java APIs—essentially names and formats for calling computer functions—and claims that Google infringed that copyright by using (reimplementing) certain Java APIs in the Android OS. When it created Android, Google wrote its own set of basic functions similar to Java (its own implementing code). But in order to allow developers to write their own programs for Android, Google used certain specifications of the Java APIs (sometimes called the “declaring code”).
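The case turned on the distinction between “declaring code” and “implementing code.” The snippet below is a hypothetical, TypeScript-flavored illustration of that distinction (the case itself concerned Java): the declaring code is the name, parameters, and organization that programmers memorize and build against, while the implementing code is the body that does the actual work, which Google wrote itself.

```typescript
// Declaring code: the API's "shape" that programmers already know.
// (java.lang.Math.max is a real Java API; this ambient declaration merely
// mirrors its signature for illustration.)
declare namespace java.lang.Math {
  function max(a: number, b: number): number;
}

// Implementing code: the part Google wrote from scratch for Android.
function maxImpl(a: number, b: number): number {
  return a >= b ? a : b;
}
```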

APIs provide a common language that lets programs talk to each other. They also let programmers operate with a familiar interface, even on a competitive platform. It would strike at the heart of innovation and collaboration to declare them copyrightable.

EFF filed numerous amicus briefs in this case explaining why the APIs should not be copyrightable and why, in any event, it is not infringement to use them in the way Google did. As we’ve explained before, the two Federal Circuit opinions are a disaster for innovation in computer software. Its first decision—that APIs are entitled to copyright protection—ran contrary to the views of most other courts and the long-held expectations of computer scientists. Indeed, excluding APIs from copyright protection was essential to the development of modern computers and the internet.

Then the second decision made things worse. The Federal Circuit's first opinion had at least held that a jury should decide whether Google’s use of the Java APIs was fair, and in fact a jury did just that. But Oracle appealed again, and in 2018 the same three Federal Circuit judges reversed the jury's verdict and held that Google had not engaged in fair use as a matter of law.

Fortunately, the Supreme Court agreed to review the case. In a 6-2 decision, Justice Breyer explained why Google’s use of the Java APIs was a fair use as a matter of law. First, the Court discussed some basic principles of the fair use doctrine, writing that fair use “permits courts to avoid rigid application of the copyright statute when, on occasion, it would stifle the very creativity which that law is designed to foster.”

Furthermore, the court stated:

Fair use “can play an important role in determining the lawful scope of a computer program copyright . . . It can help to distinguish among technologies. It can distinguish between expressive and functional features of computer code where those features are mixed. It can focus on the legitimate need to provide incentives to produce copyrighted material while examining the extent to which yet further protection creates unrelated or illegitimate harms in other markets or to the development of other products.”

In doing so, the decision underlined the real purpose of copyright: to incentivize innovation and creativity. When copyright does the opposite, fair use provides an important safety valve.

Justice Breyer then turned to the specific fair use statutory factors. Appropriately for a functional software copyright case, he first discussed the nature of the copyrighted work. The Java APIs are a “user interface” that allow users (here the developers of Android applications) to “manipulate and control” task-performing computer programs. The Court observed that the declaring code of the Java APIs differs from other kinds of copyrightable computer code—it’s “inextricably bound together” with uncopyrightable features, such as a system of computer tasks and their organization and the use of specific programming commands (the Java “method calls”). As the Court noted:

Unlike many other programs, its value in significant part derives from the value that those who do not hold copyrights, namely, computer programmers, invest of their own time and effort to learn the API’s system. And unlike many other programs, its value lies in its efforts to encourage programmers to learn and to use that system so that they will use (and continue to use) Sun-related implementing programs that Google did not copy.

Thus, since the declaring code is “further than are most computer programs (such as the implementing code) from the core of copyright,” this factor favored fair use.

Justice Breyer then discussed the purpose and character of the use. Here, the opinion shed some important light on when a use is “transformative” in the context of functional aspects of computer software, creating something new rather than simply taking the place of the original. Although Google copied parts of the Java API “precisely,” Google did so to create products fulfilling new purposes and to offer programmers “a highly creative and innovative tool” for smartphone development. Such use “was consistent with that creative ‘progress’ that is the basic constitutional objective of copyright itself.”

The Court discussed “the numerous ways in which reimplementing an interface can further the development of computer programs,” such as allowing different programs to speak to each other and letting programmers continue to use their acquired skills. The jury also heard that reuse of APIs is common industry practice. Thus, the opinion concluded that the “purpose and character” of Google’s copying was transformative, so the first factor favored fair use.

Next, the Court considered the third fair use factor, the amount and substantiality of the portion used. As a factual matter, the 11,500 lines of declaring code that Google used amounted to less than one percent of the total Java SE program. And Google used that declaring code only to permit programmers to apply their knowledge and experience with the Java APIs when writing new programs for Android smartphones. Since the amount of copying was “tethered” to a valid and transformative purpose, the “substantiality” factor favored fair use.

Finally, several reasons led Justice Breyer to conclude that the fourth factor, market effects, favored Google. Independent of Android’s introduction in the marketplace, Sun didn’t have the ability to build a viable smartphone. And any sources of Sun’s lost revenue were a result of the investment by third parties (programmers) in learning and using Java. Thus, “given programmers’ investment in learning the Sun Java API, to allow enforcement of Oracle’s copyright here would risk harm to the public. Given the costs and difficulties of producing alternative APIs with similar appeal to programmers, allowing enforcement here would make of the Sun Java API’s declaring code a lock limiting the future creativity of new programs.” This “lock” would interfere with copyright’s basic objectives.

The Court concluded that “where Google reimplemented a user interface, taking only what was needed to allow users to put their accrued talents to work in a new and transformative program, Google’s copying of the Sun Java API was a fair use of that material as a matter of law.”

The Supreme Court left for another day the issue of whether functional aspects of computer software are copyrightable in the first place. Nevertheless, we are pleased that the Court recognized the overall importance of fair use in software cases, and the public interest in allowing programmers, developers, and other users to continue to use their acquired knowledge and experience with software interfaces in subsequent platforms.

Related Cases: Oracle v. Google

553,000,000 Reasons Not to Let Facebook Make Decisions About Your Privacy

EFF - Mon, 04/05/2021 - 8:03pm

Another day, another horrific Facebook privacy scandal. We know what comes next: Facebook will argue that losing a lot of our data proves that bad third-party actors are the real problem, and that we should therefore trust Facebook to make even more decisions about our data in order to protect us from them. If history is any indication, that’ll work. But if we finally wise up, we’ll respond to this latest crisis with serious action: passing America’s long-overdue federal privacy law (with a private right of action) and forcing interoperability on Facebook so that its user-hostages can escape its walled garden.

Facebook created this problem, but that doesn’t make the company qualified to fix it, nor does it mean we should trust them to do so. 

In January 2021, Motherboard reported on a bot that was selling records from a 500 million-plus person trove of Facebook data, offering phone numbers and other personal information. Facebook said the data had been scraped through a bug that was present as early as 2016, and which the company claimed to have patched in 2019. Last week, a dataset containing 553 million Facebook users’ data—including phone numbers, full names, locations, email addresses, and biographical information—was published for free online. (It appears to be the same dataset Motherboard reported on in January.) More than half a billion current and former Facebook users are now at high risk of various kinds of fraud.

While this breach is especially ghastly, it’s also just another scandal for Facebook, a company that has spent years pursuing deceptive and anticompetitive tactics to amass largely nonconsensual dossiers on its 2.6 billion users, as well as on billions of other people who have no Facebook, Instagram, or WhatsApp account, many of whom never had one.

Based on past experience, Facebook’s next move is all but inevitable: after expressing regret over this irreversible data breach, the company will double down on the tactics that lock its users into its walled garden, in the name of defending their privacy. That’s exactly what the company did during the Cambridge Analytica fiasco, when it used the pretense of protecting users from dangerous third parties to lock out competitors, including those that used Facebook’s APIs to help users part ways with the service without losing touch with their friends, families, communities, and professional networks.

According to Facebook, the data in this half-billion-person breach was harvested thanks to a bug in its code. We get that. Bugs happen. That’s why we’re totally unapologetic about defending the rights of security researchers and other bug-hunters who help discover and fix those bugs. The problem isn’t that a Facebook programmer made a mistake: the problem is that this mistake was so consequential.

Facebook doesn’t need all this data to offer its users a social networking experience: it needs that data so it can market itself to advertisers, who paid the company $84.1 billion in 2020. It warehoused that data for its own benefit, in full knowledge that bugs happen, and that a bug could expose all of that data, permanently. 

Given all that, why do users stay on Facebook? For many, it’s a hostage situation: their friends, families, communities, and professional networks are on Facebook, so that’s where they have to be. Meanwhile, those friends, family members, communities, and professional networks are stuck on Facebook because their friends are there, too. Deleting Facebook comes at a very high cost.

It doesn’t have to be this way. Historically, new online services—including, at one time, Facebook—have smashed big companies’ walled gardens, allowing those former user-hostages to escape from dominant services but still exchange messages with the communities they left behind, using techniques like scraping, bots, and other honorable tools of reverse-engineering freedom fighters. 

Facebook has gone to extreme lengths to keep this from ever happening to its services. Not only has it sued rivals that gave its users the ability to communicate with their Facebook friends without subjecting themselves to Facebook’s surveillance, it has also bought out successful upstarts specifically because it knew it was losing users to them. It’s a winning combination: use the law to prevent rivals from giving users more control over their privacy, then use the monopoly rents those locked-in users generate to buy out anyone who tries to compete with you.

Those 553,000,000 users whose lives are now an eternal open book to the whole internet never had a chance. Facebook took them hostage. It harvested their data. It bought out the services they preferred over Facebook. 

And now that 553,000,000 people should be very, very angry at Facebook, we need to watch carefully to make sure that the company doesn’t capitalize on their anger by further increasing its advantage. As governments from the EU to the U.S. to the UK consider proposals to force Facebook to open up to rivals so that users can leave Facebook without shattering their social connections, Facebook will doubtless argue that such a move will make it impossible for Facebook to prevent the next breach of this type.

Facebook is also likely to weaponize this breach in its ongoing war against accountability: namely, against a scrappy group of academics and Facebook users. Ad Observer and Ad Observatory are a pair of projects from NYU’s Online Transparency Project that scrape the ads Facebook serves to their volunteers and place them in a public repository, where scholars, researchers, and journalists can track how badly Facebook is living up to its promise to halt paid political disinformation.

Facebook argues that any scraping—even highly targeted, careful, publicly auditable scraping that holds the company to account—is an invitation to indiscriminate mass-scraping of the sort that compromised the half-billion-plus users in the current breach. Instead of scraping its ads, the company says that its critics should rely on a repository that Facebook itself provides, and trust that the company will voluntarily reveal any breaches of its own policies.
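
To make the difference in kind concrete, here is a hypothetical sketch of the sort of record this style of targeted ad-transparency scraping collects. The field names below are our own illustration, not Ad Observer’s actual schema; the point is that such a record describes the ad and its disclosures rather than the personal details exposed in the breach.

    import java.time.Instant;

    // Hypothetical shape of a crowd-sourced ad-transparency record (assumed field
    // names, not Ad Observer's actual schema). Note what is absent: no names, phone
    // numbers, or other personal data about the volunteer who saw the ad.
    public record ObservedAd(
            String adText,           // the ad's visible copy
            String paidForBy,        // the "Paid for by" disclosure string
            String targetingNotice,  // the "Why am I seeing this ad?" explanation
            Instant observedAt       // when the volunteer's browser saw the ad
    ) {}

Records like this can be audited publicly without exposing anyone’s personal information, which is precisely the distinction Facebook’s argument glosses over.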

From Facebook’s point of view, a half-billion person breach is a half-billion excuses not to open its walled garden or permit accountability research into its policies. In fact, the worse the breach, the more latitude Facebook will argue it should get: “If this is what happens when we’re not being forced to allow competitors and critics to interoperate with our system, imagine what will happen if these digital trustbusters get their way!”

Don’t be fooled. Privacy does not come from monopoly. No one came down off a mountain with two stone tablets, intoning “Thou must gather and retain as much user data as is technologically feasible!” The decision to gobble up all this data and keep it around forever has very little to do with making Facebook a nice place to chat with your friends and everything to do with maximizing the company’s profits. 

Facebook’s data breach problems are the inevitable result of monopoly, and in particular of the company’s knowledge that it can heap endless abuses on its users and retain them anyway. Even if users quit Facebook, they’re likely to end up on Facebook-owned subsidiaries like Instagram or WhatsApp, and even if they don’t, Facebook will still get to maintain its dossiers on their digital lives.

Facebook’s breaches are proof that we shouldn’t trust Facebook—not that we should trust it more. Creating a problem in no way qualifies you to solve that problem. As we argued in our January white paper, Privacy Without Monopoly: Data Protection and Interoperability, the right way to protect users is with a federal privacy law with a private right of action.

Right now, Facebook’s users have to rely on Facebook to safeguard their interests. That doesn’t just mean crossing their fingers and hoping Facebook won’t make another half-billion-user blunder—it also means hoping that Facebook won’t intentionally disclose their information to a third party as part of its normal advertising activities. 

Facebook is not qualified to decide what the limits on its own data-processing should be. Those limits should come from democratically accountable legislatures, not autocratic billionaire CEOs. America is sorely lacking a federal privacy law, particularly one that empowers internet users to sue companies that violate their privacy. A privacy law with a private right of action would mean that you wouldn’t be hostage to the self-interested privacy decisions of vast corporations, and it would mean that when they did you dirty, you could get justice on your own, without having to convince a District Attorney or Attorney General to go to bat for you.

A federal privacy law with a private right of action would open up a vast universe of possible new interoperable services that plug into companies like Facebook, allowing users to leave without cancelling their lives; these new services would have to play by the federal privacy rules, too.

That’s not what we’re going to hear from Facebook, though: in Facebookland, the answer to their abuse of our trust is to give them more of our trust; the answer to the existential crisis of their massive scale is to make them even bigger. Facebook created this problem, and they are absolutely incapable of solving it.

First Circuit Upholds First Amendment Right to Secretly Audio Record the Police

EFF - Mon, 04/05/2021 - 5:52pm

EFF applauds the U.S. Court of Appeals for the First Circuit for holding that the First Amendment protects individuals when they secretly audio record on-duty police officers. EFF filed an amicus brief in the case, Martin v. Rollins, which was brought by the ACLU of Massachusetts on behalf of two civil rights activists. This is a victory for people within the jurisdiction of the First Circuit (Massachusetts, Maine, New Hampshire, Puerto Rico and Rhode Island) who want to record an interaction with police officers without exposing themselves to possible reprisals for visibly recording.

The First Circuit struck down as unconstitutional the Massachusetts anti-eavesdropping (or wiretapping) statute to the extent it prohibits the secret audio recording of police officers performing their official duties in public. The law generally makes it a crime to secretly audio record any conversation without consent, even where the participants have no reasonable expectation of privacy, making the Massachusetts statute unique among the states.

The First Circuit had previously held in Glik v. Cunniffe (2011) that the plaintiff had a First Amendment right to record police officers arresting another man in Boston Common. Glik had used his cell phone to openly record both audio and video of the incident. The court had held that the audio recording did not violate the Massachusetts anti-eavesdropping statute’s prohibition on secret recording because Glik’s cell phone was visible to officers.

Thus, following Glik, the question remained open as to whether individuals have a First Amendment right to secretly audio record police officers, or if instead they could be punished under the Massachusetts statute for doing so. (A few years after Glik, in Gericke v. Begin (2014), the First Circuit held that the plaintiff had a First Amendment right to openly record the police during someone else’s traffic stop to the extent she wasn’t interfering with them.)

The First Circuit in Martin held that recording on-duty police officers, even secretly, is protected newsgathering activity similar to that of professional reporters that “serve[s] the very same interest in promoting public awareness of the conduct of law enforcement—with all the accountability that the provision of such information promotes.” The court further explained that recording “play[s] a critical role in informing the public about how the police are conducting themselves, whether by documenting their heroism, dispelling claims of their misconduct, or facilitating the public’s ability to hold them to account for their wrongdoing.”

The ability to secretly audio record on-duty police officers is especially important given that many officers retaliate against civilians who openly record them, as happened in a recent Tenth Circuit case. The First Circuit agreed with the Martin plaintiffs that secret recording can be a “better tool” to gather information about police officers, because officers are less likely to be disrupted and, more importantly, secret recording may be the only way to ensure that recording “occurs at all.” The court stated that “the undisputed record supports the Martin Plaintiffs’ concern that open recording puts them at risk of physical harm and retaliation.”

Finally, the court was not persuaded that the privacy interests of civilians who speak with or near police officers are burdened by secretly audio recording on-duty police officers. The court reasoned that “an individual’s privacy interests are hardly at their zenith in speaking audibly in a public space within earshot of a police officer.”

Given the critical importance of recordings for police accountability, the First Amendment right to record police officers exercising their official duties has been recognized by a growing number of federal jurisdictions. In addition to the First Circuit, federal appellate courts in the Third, Fifth, Seventh, Ninth, and Eleventh Circuits have directly upheld this right.

Disappointingly, the Tenth Circuit recently dodged the question. For all the reasons in the First Circuit’s Martin decision, the Tenth Circuit erred, and the remaining circuits must recognize the First Amendment right to record on-duty police officers as the law of the land.

Maine Should Take this Chance to Defund the Local Intelligence Fusion Center

EFF - Fri, 04/02/2021 - 2:18pm

Maine state representative Charlotte Warren has introduced LD1278 (HP938), An Act To End the Maine Information and Analysis Center Program, a bill that would defund the Maine Information and Analysis Center (MIAC), Maine’s only fusion center. EFF is happy to support this bill in hopes of defunding an unnecessary, intrusive, and often-harmful piece of the U.S. surveillance regime. You can read the full text of the bill here.

Fusion centers are yet another unnecessary cog in the surveillance state—and one that serves the intrusive function of coordinating surveillance activities and sharing information between federal law enforcement, the national security surveillance apparatus, and local and state police. Across the United States, there are at least 78 fusion centers that were formed by the Department of Homeland Security in the wake of the war on terror and the rise of post-9/11 mass surveillance. Since their creation, fusion centers have been hammered by politicians, academics, and civil society groups for their ineffectiveness, dysfunction, mission creep, and unregulated tendency to veer into political policing. As scholar Brendan McQuade wrote in his book Pacifying the Homeland: Intelligence Fusion and Mass Supervision,

“On paper, fusion centers have the potential to organize dramatic surveillance powers. In practice however, what happens at fusion centers is circumscribed by the politics of law enforcement. The tremendous resources being invested in counterterrorism and the formation of interagency intelligence centers are complicated by organization complexity and jurisdictional rivalries. The result is not a revolutionary shift in policing but the creation of uneven, conflictive, and often dysfunctional intelligence-sharing systems.”

But in recent months, the dysfunction of fusion centers and the ease with which they sink into policing First Amendment-protected activities have been on full display. After a series of leaks revealed communications from inside police departments, fusion centers, and law enforcement agencies across the country, MIAC came under particular scrutiny for sharing dubious intelligence generated by far-right social media accounts with local law enforcement. Specifically, the Maine fusion center helped perpetuate disinformation that stacks of bricks and stones had been strategically placed throughout a Black Lives Matter protest as part of a larger plan for destruction, and caused police to plan and act accordingly. This was, to put it plainly, a government intelligence agency spreading fake news that could have gotten people exercising their First Amendment rights hurt. This is in addition to a whistleblower lawsuit from a state trooper alleging that the fusion center routinely violated civil rights.

The first two decades of the twenty-first century have been characterized by a blank check to grow and expand the infrastructure that props up mass surveillance. Fusion centers are at the very heart of that excess. They have proven themselves to be unreliable and even harmful to the very people the national security apparatus claims to want to protect. Why do states continue to fund intelligence fusion when, at its best, it enacts political policing that poses an existential threat to immigrants, activists, and protestors—and at its worst, it actively disseminates false information to police?

We echo the sentiments of Representative Charlotte Warren and other dedicated Maine residents who say it's time to shift MIAC's nearly million-dollar per year budget towards more useful programs. Maine, pass LD1278 and defund the Maine Information and Analysis Center. 



Ethos Capital Is Grabbing Power Over Domain Names Again, Risking Censorship-For-Profit. Will ICANN Intervene?

EFF - Fri, 04/02/2021 - 1:15am

Ethos Capital is at it again. In 2019, this secretive private equity firm that includes insiders from the domain name industry tried to buy the nonprofit that runs the .ORG domain. A huge coalition of nonprofits and users spoke out. Governments expressed alarm, and ICANN (the entity in charge of the internet’s domain name system) scuttled the sale. Now Ethos is buying a controlling stake in Donuts, the largest operator of “new generic top-level domains.” Donuts controls a large swathe of the domain name space. And through a recent acquisition, it also runs the technical operations of the .ORG domain. This acquisition raises the threat of increased censorship-for-profit: suspending or transferring domain names against the wishes of the user at the request of powerful corporations or governments. That’s why we’re asking the ICANN Board to demand changes to Donuts’ registry contracts to protect its users’ speech rights.

Donuts is big. It operates about 240 top-level domains, including .charity, .community, .fund, .healthcare, .news, .republican, and .university. And last year it bought Afilias, another registry company that also runs the technical operations of the .ORG domain. Donuts already has questionable practices when it comes to safeguarding its users’ speech rights. Its contracts with ICANN contain unusual provisions that give Donuts an unreviewable and effectively unlimited right to suspend domain names—causing websites and other internet services to disappear.

Relying on those contracts, Donuts has cozied up to powerful corporate interests at the expense of its users. In 2016, Donuts made an agreement with the Motion Picture Association to suspend domain names of websites that MPA accused of copyright infringement, without any court process or right of appeal. These suspensions happen without transparency: Donuts and MPA haven’t even disclosed the number of domains that have been suspended through their agreement since 2017.

Donuts also gives trademark holders the ability to pay to block the registration of domain names across all of Donuts’ top-level domains. In effect, this lets trademark holders “own” words and prevent others from using them as domain names, even in top-level domains that have nothing to do with the products or services for which a trademark is used. It’s a legal entitlement that isn’t part of any country’s trademark law, and it was considered and rejected by ICANN’s multistakeholder policy-making community.

These practices could accelerate and expand with Ethos Capital at the helm. As we learned last year during the fight for .ORG, Ethos expects to deliver high returns to its investors while preserving its ability to change the rules for domain name registrants, potentially in harmful ways. Ethos refused meaningful dialogue with domain name users, instead proposing an illusion of public oversight and promoting it with a slick public relations campaign. And private equity investors have a sordid record of buying up vital institutions like hospitals, burdening them with debt, and leaving them financially shaky or even insolvent.

Although Ethos’s purchase of Donuts appears to have been approved by regulators, ICANN should still intervene. Like all registry operators, Donuts has contracts with ICANN that allow it to run the registry databases for its domains. ICANN should give this acquisition as much scrutiny as it gave Ethos’s attempt to buy .ORG. And to prevent Ethos and Donuts from selling censorship as a service at the expense of domain name users, ICANN should insist on removing the broad grants of censorship power from Donuts’ registry contracts. ICANN did the right thing last year when confronted with the takeover of .ORG. We hope it does the right thing again by reining in Ethos and Donuts.

 
