IPANDETEC’s Report on Panama’s ISPs Shows Improvements, But More Work Needed to Protect Users’ Privacy
IPANDETEC, the leading digital rights organization in Panama, today released its second annual "Who Defends Your Data" (¿Quién Defiende Tus Datos?) report, assessing how well the country’s mobile phone and Internet service providers (ISPs) protect users' communications data. While most companies received low scores, the report shows some ISPs making progress in a few important areas: ensuring payment processing services and websites are secure, requiring law enforcement to obtain warrants before accessing user data, and publicly promoting data privacy as a human right. On the latter point, all ISPs surveyed are working on an agreement to provide Internet connections to students and people affected by COVID-19, a welcome development as many are struggling without Internet access during the pandemic.
IPANDETEC looked at the privacy practices of Panama’s main mobile companies: Claro (America Movil), Digicel, Más Móvil (a joint operation between Cable & Wireless Communications and the Panamanian government, which owns 49% of the company), and Tigo, the new name for Movistar, the brand owned by Spain’s Telefonica whose assets were sold to Millicom International last year.
¿Quién Defiende Tus Datos? is modeled after EFF’s Who Has Your Back report, which was created to shine a light on U.S. ISPs’ policies for protecting users’ private information so consumers could make informed choices about what companies they should entrust their data to. Internet access and digital communications are part of everyday life for most people, and the companies that provide these services collect and store vast amounts of private information from their customers. People have a right to know if and how their data is being protected—that’s why IPANDETEC and other digital rights organizations across Latin America and Spain are evaluating and reporting on what ISPs publicly disclose about their data protection practices.
ISPs in Panama were evaluated on seven criteria concerning data protection, transparency, user notification, judicial authorization, defense of human rights, digital security, and law enforcement guidelines. Complete descriptions of what the categories include are provided later in this post.
Tigo, previously called Movistar, scored the highest, achieving full or partial stars in five of the seven categories assessed. It was the only company in the survey to receive a full star for stating that it requires law enforcement agencies seeking user data to first obtain a warrant. Tigo was also the only company to receive some credit for providing partial information about procedures for law enforcement requests for customer data—this is largely owing to the fact that its current parent company Millicom publishes a policy for assisting law enforcement. But the document refers to a global policy; Tigo’s local policy in Panama isn’t clear, so it received a quarter of a star.
Tigo was also the only company to receive partial credit in the data protection policy category. The other companies provide some information about data collection from visits to their websites and use of their apps, but not about data collected from their regular Internet or mobile phone services. Más Móvil says its contracts with customers provide information about privacy and data protection. But these contracts aren’t made public. How companies collect, use, share, and manage customers' personal data should be publicly disclosed so it's available to people before they choose a telecom operator. Tigo, through Millicom, discloses only some information about data collection policies for online services and received a quarter of a star.
Claro had the second-highest score, with one full star in the digital security category and half stars in the defense of human rights and judicial order categories. In the latter category, the company’s global policy is to only comply with law enforcement requests for users’ content and metadata when there’s an order from “the competent authority.” The global policy isn’t available on Claro’s local Panama business website, and Claro’s policy for Panama is less precise about a warrant requirement, hence the awarding of a half star.
Claro received a full star in the digital security category, an improvement over last year, by committing to using HTTPS on its website and for processing online payments. A big problem revealed by the report is a general lack of transparency about privacy and security practices by ISPs in Panama. None of the ISPs surveyed received credit for publishing a transparency report. Tigo’s previous and current parent companies, Telefonica and Millicom, respectively, didn’t include information about their mobile Panamanian businesses in their transparency reports because the Movistar sale transaction was in progress. As such, Tigo received no stars in the transparency report category. We hope to see that change in the next report, not just for Tigo but for the other companies as well.
The lack of transparency reports isn’t the only disclosure flaw among Panama’s leading ISPs. None commit to notifying users when the government gets access to their data, according to IPANDETEC's study.
The specific criteria for each category and the final results of the study are below. For more information on each company and Panama’s ICT sector, you can find the full report on IPANDETEC’s website.
Data Protection: Does the company post a document detailing its collection, use, disclosure, and management of personal customer data?
- The data protection policy is published on its website
- The policy is written in clear and easily accessible language
- The policy details what data is collected
- The policy establishes the retention period for user data
Transparency: Does the company post an annual transparency report listing the number of government requests for customer data they’ve received, and how many were accepted and rejected?
- The company publishes a transparency report on its website
- The report is written in clear and easily accessible language
- The report contains data on the number and type of requests received, and how many were accepted
User Notification: Does the company promise to notify users when the government requests their data?
- The company states it will notify users when the government accesses their information as soon as the law allows
Judicial Authorization: Does the company explicitly state it will only comply with authorities’ requests for user data if they have a warrant?
- The company states in its policies that it requires a warrant before law enforcement can access the content of users' communications
- The company rejects requests by law enforcement that violate legal requirements
Defense of Human Rights: Does the company publicly promote and defend the human rights of its users, specifically the privacy of their communications and the protection of their personal data?
- The company promotes user privacy and data protection through campaigns or initiatives
- The company supports legislation, impact litigation, or programs favoring user privacy and data protection
- The company participates in cross-sector agreements promoting human rights as a core tenet of its business
Digital Security: Are the company’s website and online payment service secure?
- The company uses HTTPS on its website
- The company uses HTTPS when processing payments online
Law Enforcement Guidelines: Does the company publish guidelines outlining the legal requirements law enforcement must meet when requesting customer data?
- The company publishes guidelines for law enforcement data requests.
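The Digital Security category above boils down to a simple, checkable property: the company’s pages, and especially its payment flows, must be served over TLS, and plain-HTTP visitors must be upgraded to HTTPS. A minimal sketch of that check (the function names and example URLs are illustrative assumptions, not part of IPANDETEC’s actual methodology):

```python
from urllib.parse import urlparse

def is_https(url: str) -> bool:
    """True if the URL uses the HTTPS scheme (i.e., TLS-protected)."""
    return urlparse(url).scheme == "https"

def upgrades_to_https(original_url: str, redirect_location: str) -> bool:
    """Check that an HTTP request is redirected to HTTPS on the same
    host -- the upgrade behavior the Digital Security category rewards."""
    orig = urlparse(original_url)
    dest = urlparse(redirect_location)
    return dest.scheme == "https" and dest.hostname == orig.hostname

# A site that sends http:// visitors to https:// on the same host passes:
assert upgrades_to_https("http://example.com/", "https://example.com/")
# A payment page served over plain HTTP fails the payment criterion:
assert not is_https("http://pay.example.com/checkout")
```

In practice an evaluator would also confirm the certificate is valid and that the redirect actually occurs, but the scheme-and-host check captures what the two bullets above are measuring.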
The report shows that all four ISPs surveyed support the idea that user privacy and data protection are human rights. The best way for companies to prove their commitment to this principle is by doing a better job at protecting their customers’ private information and being more transparent about how they collect, use, and share their data. We hope to see improvements across all categories in the next report.
The Senate Commerce Committee met this week to question the heads of Facebook, Twitter, and Google about Section 230, the most important law protecting free speech online. Section 230 reflects the common-sense principle that legal liability for unlawful online speech should rest with the speaker, not the Internet services that make online speech possible. Section 230 further protects Internet companies’ ability to make speech moderation decisions by making it clear that platforms can make those decisions without inviting liability for the mistakes they will inevitably make.
Section 230’s importance to free speech online can’t be overstated. Without Section 230, social media wouldn’t exist, at least in its current form. Neither would most forums, comment sections, or other places where people are allowed to interact and share their ideas with each other. The legal risk involved with operating such a space would simply be too high. Simply put, the Internet would be a less interactive, more restrictive place without Section 230.
If some critics of Section 230 get their way, the Internet will soon become a more restrictive place. Section 230 has become a lightning rod this year: it’s convenient for politicians and commentators on both sides of the aisle to blame the law—which protects a huge range of Internet services—for the decisions of a few very large companies (usually Google, Twitter, and Facebook). Republicans make questionable claims about bias against conservatives, arguing that platforms should be required to moderate less speech in order to maintain liability protections. Even President Trump has called multiple times for a repeal of Section 230, though repealing the law would certainly mean far fewer places for conservatives to share their ideas online, not more. Democrats often argue the opposite, saying that the law should be changed to require platforms to do more to patrol hate speech and disinformation.
The questions and comments in this week’s hearing followed that familiar pattern, with Republicans scolding Big Tech for “censoring” and fact-checking conservative speech—the president’s in particular—and Democrats demanding that tech companies do more to curb misleading and harmful statements on their platforms—the president’s in particular. Lost in the charade is a stark reality: undermining Section 230 wouldn’t necessarily improve platforms’ behavior but would bring severe consequences for speech online as a whole.

Three Internet Giants Don’t Speak for the Internet
One of the problems with the current debate over Section 230 is that it’s treated as a discussion about big tech companies. But Section 230 affects all of us. When volunteer moderators take action in small web discussion forums—they’re protected by Section 230. When a hobbyist blogger removes spam or other inappropriate material from a comment section—again, protected by Section 230. And when an Internet user retweets a tweet or even forwards an email, Section 230 protects that user too.
There were a lot of disappointing mistakes made in this hearing, but the first and worst mistake was deciding who to call to speak in the first place. Hauling the CEOs of Facebook, Google, and Twitter in front of Congress to be the sole witnesses defending a law that affects the entire Internet may be good political theatre, but it's a terrible way to make technology policy. By definition, the three largest tech companies alone can’t provide a clear picture of what rewriting Section 230 would mean to the entire Internet.
All three of these CEOs run companies that will be able to manage nearly any level of government regulation or intervention. That’s not true of the thousands of small startups that don’t have the resources of world-spanning companies. It’s no surprise that Facebook, for instance, is now signaling an openness to more government regulation of the Internet—far from being a death blow for the company, it may well help the social media giant insulate itself from competition.
Dorsey, Zuckerberg, and Pichai don’t just fail to represent the breadth of Internet companies that would be affected by a change to Section 230; as three men in positions of enormous power, they also fail to represent the Internet users that Congress would harm. Internet censorship invariably harms the least powerful members of society first—the rural LGBTQ teenager who depends on the acceptance of Internet communities, the activist using social media to document government abuses of human rights. When online platforms clamp down on their users’ speech, it’s the marginalized voices that disappear first.

Facebook’s Call for Regulation Doesn’t Address the Big Problems
In his opening testimony, Facebook CEO Mark Zuckerberg made it clear why we can’t trust him to be a witness on Section 230, endorsing changes to Section 230 that would give Facebook an advantage over would-be competitors. Zuckerberg acknowledged the foundational role that Section 230 has played in the development of the Internet, but he also suggested that “Congress should update the law to make sure that it’s working as intended.” He went on to propose that Congress should pass laws requiring platforms to be more transparent in their moderation decisions. He also advocated legislation to “separate good actors from bad actors.” From Zuckerberg’s written testimony (PDF):
At Facebook, we don’t think tech companies should be making so many decisions about these important issues alone. I believe we need a more active role for governments and regulators, which is why in March last year I called for regulation on harmful content, privacy, elections, and data portability.
EFF recognizes some of what Zuckerberg identifies as the problems with today’s social media platforms—indeed, we have criticized Facebook for providing inadequate transparency into its own moderation processes—but that doesn’t mean Facebook should speak for the Internet. Any reform that puts more burden on platforms in order to maintain the Section 230 liability shield will mean a higher barrier for new startups to overcome before they can compete with Facebook or Google, be it in the form of moderation staff, investment in algorithmic filters, or the high-powered lawyers necessary to fight off new lawsuits. If it becomes harder for platforms to receive and maintain Section 230 protections, then Google, Facebook, and Twitter will become even more dominant over the Internet. This consolidation happened very dramatically after Congress passed SESTA/FOSTA. The law, which Facebook supported, made it much harder for online dating services to operate without massive legal risk, and multiple small dating sites recognized that reality and immediately shut down. Just a few weeks later, Facebook announced that it was entering the online dating market.
Unsurprisingly, while the witnesses paid lip service to the importance of competition among online services, none of them proposed solutions to their outsized control over online speech. None of them proposed comprehensive data privacy legislation that would let users sue the big tech companies when they violate our privacy rights. None of them proposed modernizing antitrust law to stop the familiar pattern of big tech companies simply buying their would-be competitors. Many members of Congress—Republican and Democrat—recognized that the failings they blamed on large tech companies were magnified by the lack of meaningful competition among online speech platforms, but unfortunately, very little of the hearing focused on real solutions to that lack of competition.

Give Users Real Control Over Their Online Experience
Republican Senators’ insistence on framing the hearing around “censorship of conservatives” precluded a more serious discussion of the problems with online platforms’ aggressive speech enforcement practices. As we’ve said before, online censorship magnifies existing imbalances in society: it’s not the liberal silencing the conservative; it’s the powerful silencing the powerless. Any serious discussion of the problems with platforms’ moderation practices must consider the ways in which powerful players—including state actors—take advantage of those moderation practices in order to censor their enemies.
EFF told Congress in 2018 that although it’s not lawmakers’ place to tell Internet companies what speech to keep up or take down, there are serious problems with how social media platforms enforce their policies and what voices disappear from the Internet. We said that Internet companies had a lot more work to do to ensure that their moderation decisions were fair and transparent and that when mistakes inevitably happened, users would be able to appeal moderation decisions to a real person, not an algorithm.
We also suggested that platforms ought to put more control over what speech we’re allowed to see into the hands of us, the users. Today, social media companies make thousands of decisions on our behalf—about whether we’ll see sexual speech and imagery, whether we’ll see certain types of misinformation, what types of extremist speech we’ll be exposed to. Why not expose those decisions to users and let us have some say in them? As we wrote in our letter to the House Judiciary Committee, “Facebook already allows users to choose what kinds of ads they want to see—a similar system should be put in place for content, along with tools that let users make those decisions on the fly rather than having to find a hidden interface.”
We were gratified, then, that Twitter’s Jack Dorsey identified “empowering algorithmic choice” as a high priority for the company. From Dorsey’s written testimony (PDF):
In December 2018, Twitter introduced an icon located at the top of everyone’s timeline that allows individuals using Twitter to easily switch to a reverse chronological order ranking of the Tweets from accounts or topics they follow. This improvement gives people more control over the content they see, and it also provides greater transparency into how our algorithms affect what they see. It is a good start. We believe this points to an exciting, market-driven approach where people can choose what algorithms filter their content so they can have the experience they want. [...] Enabling people to choose algorithms created by third parties to rank and filter their content is an incredibly energizing idea that is in reach.
We hope to see more of that logical approach to empowering users from a major Internet company, but we also need to hold Twitter and its peers accountable to it. As Dorsey noted in his testimony, true algorithmic choice would mean not only letting people customize how Twitter filters their news feed, but letting third parties create alternative filters too. Unfortunately, the large Internet companies have a poor track record when it comes to letting their services play well with others, employing a mix of copyright abuse, end-user license agreements, and computer crime laws to prevent third parties from creating tools that interact with their own. While we applaud Twitter’s interest in giving users more control over their timeline, we must ensure that its solutions actually promote a competitive landscape and don’t further lock users into Twitter’s own walled garden.
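At bottom, “algorithmic choice” means treating the ranking function as a pluggable, user-selected component rather than a fixed platform decision. A minimal sketch of the idea (all names and fields here are hypothetical, not Twitter’s actual code or API):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    engagement: int  # likes + reshares, a stand-in "relevance" signal

# Two interchangeable rankers. "Algorithmic choice" means the user --
# or a third party -- decides which one orders the feed.
def reverse_chronological(posts):
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement_ranked(posts):
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

feed = [
    Post("a", "old but popular", datetime(2020, 10, 1, tzinfo=timezone.utc), 500),
    Post("b", "brand new", datetime(2020, 10, 28, tzinfo=timezone.utc), 3),
]

# The same feed, two orderings, chosen by the user rather than the platform:
assert reverse_chronological(feed)[0].text == "brand new"
assert engagement_ranked(feed)[0].text == "old but popular"
```

A third-party filter would simply be another function with the same signature, which is why real interoperability, not just an in-app toggle, is what makes the approach meaningful.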
While Internet companies’ moderation practices are a problem, the Senate Commerce Committee hearing ultimately failed to address the real problems with speech moderation at scale. While members of Congress acknowledge that a few flawed companies have too much control over online speech, they’ve consistently failed to enact structural solutions to the lack of meaningful competition in social media. Before they consider making further changes to Section 230, lawmakers must understand that the decisions they make could spell disaster—not just for Facebook and Twitter’s would-be competitors but, most importantly, for the voices they risk kicking offline.
No matter what happens in next week’s election, we are certain to see more misguided attempts to undermine Section 230 in the next Congress. We hope that Congress will take the time to listen to experts and users that don’t run behemoth Internet companies. In the meantime, please take a moment to write to your members of Congress and tell them why a strong Section 230 is important to you.
Have you tried modifying, repairing, or diagnosing a product but bumped into encryption, a password requirement, or some other technological roadblock that got in the way? EFF wants your stories to help us fight for your right to get around those obstacles.
Section 1201 of the Digital Millennium Copyright Act (DMCA) makes it illegal to circumvent certain digital access controls (also called “technological protection measures” or “TPMs”). Because software code can be copyrightable, this gives product manufacturers a legal tool to control the way you interact with the increasingly powerful devices in your life. While Section 1201’s stated goal was to prevent copyright infringement, the law has been used against artists, researchers, technicians, and other product owners, even when their reasons for circumventing manufacturers’ digital locks were completely lawful.
Every three years, there is a window of opportunity to get exemptions to this law to protect legitimate uses of copyrighted works. Last time around, we were able to preserve your right to repair, maintain, and diagnose your smartphones, home appliances, and home systems. For 2021, we’re asking the Copyright Office to expand that exemption to cover all software-enabled devices and to cover the right to modify or customize those products, not just repair them. As more and more products have computerized components, TPM-encumbered software runs on more of the devices we use daily—from toys to microwaves—putting you at risk of violating the statute if you access the code of something you own to lawfully customize it.

How You Can Help
To help make our case for this new exemption, we want to hear from you about your experiences with anything that might have a software component with a TPM that prevents you from making full use of the products you already own. From the Internet of Things and medical devices, to Smart TVs and game consoles, to appliances and computer peripherals, to any other items you can think of—that’s what we want to hear about! As an owner, you should have the right to repair, modify, and diagnose the products you rely on by being able to access all of the software code contained in the products. These include products you might not associate with software, such as insulin pumps, as well as smart products like toys and refrigerators.
If you have a story about how:
- someone in the United States;
- attempted or planned to repair, modify, or diagnose a product with a software component; and
- encountered a technological protection measure (including DRM or digital rights management—any form of software security measure that restricts access to the underlying software code, such as encryption, password protection, or authentication requirements) that prevented completing the modification, repair, or diagnosis (or had to be circumvented to do so)
—we want to hear from you! Please email us at RightToModemail@example.com with the information listed below, and we’ll curate the stories we receive so we can present the most relevant ones alongside our arguments to the Copyright Office. The comments we submit to the Copyright Office will become a matter of public record, but we will not include your name if you do not wish to be identified by us. Submissions should include the following information:
- The product you (or someone else) wanted to modify, repair, or diagnose, including brand and model name/number if available.
- What you wanted to do and why.
- How a TPM interfered with your project, including a description of the TPM.
- What did the TPM restrict access to?
- What did the TPM block you from doing? How?
- If you know, what would be required to get around the TPM? Is there another way you could accomplish your goal without doing this?
- Optional: Links to relevant articles, blog posts, etc.
- Whether we may identify you in our public comments, and your name and town of residence if so. We will treat all submissions as anonymous unless you expressly give us this permission to identify you.
This is a team effort. In seeking repair and tinkering exemptions over the years, we’ve used some great stories from you about your repair problems and projects involving your cars and other devices. In past cycles, your stories helped the Copyright Office understand the human impact of this law. Help us fight for your rights once more!
The International Trade Commission, or ITC, is a federal agency in Washington D.C. that investigates unfair trade practices. Unfortunately, in recent years, it has also become a magnet for some of the worst abusers of the U.S. patent system. Now, there’s a bill in Congress, the Advancing America’s Interests Act (H.R. 8037), that could finally get patent trolls out of the ITC—a place they never should have been allowed in the first place.
Patent owners can ask the Commission to investigate an allegation of infringement, in addition to their right to bring a patent infringement case into federal court. The ITC can’t award damages like a district court can, but the ITC can grant an “exclusion order,” which bans importation of the excluded item, and orders customs agents to seize products at the border.
Not everyone is entitled to ask the ITC for an exclusion order; the complainant must be part of a “domestic industry.” But over the years, administrative law judges at the ITC have lowered the bar for what that requires. Engaging in patent licensing and litigation—without more—is enough to establish a “domestic industry” in this country, in the eyes of ITC judges. That means patent trolls, which produce no goods or services, can qualify. Even patent assertion entities based in other countries are using the ITC. For example, last year, the ITC took up a case in which an Ireland-based patent troll sought to block U.S. imports of 80% of Android tablets, 86% of Windows tablets, and more than 50% of Android smartphones.
The ITC was never intended to be a forum for entities that make nothing to pursue patent infringement allegations against domestic companies. But that’s exactly what it has become. Since 2006, the ITC’s own data show that between 6 and 33 percent of the venue’s patent cases each year are brought by entities that don’t practice the patent, which are, more often than not, patent trolls. The fast pace of ITC litigation, which operates under an 18-month time limit, makes it more expensive and less fair to defendants than district court litigation. When U.S. companies are forced to pay huge legal fees and big settlements to avoid threats to their product lines, prices go up and consumers are the losers.
Our patent system is broken, and in need of reform. Getting patent trolls out of the ITC is one important step toward that goal—and one we can achieve. The Advancing America’s Interests Act, H.R. 8037, would require complainants at the ITC to show that they are part of a real domestic industry—one that employs labor or capital, or invests in plants, equipment, or research and development. “Licensing” would meet the requirement for a domestic industry only if it leads to “adoption and development of articles” that embody the patented work.
H.R. 8037, sponsored by Representatives Suzan DelBene (D-WA) and David Schweikert (R-AZ), would also compel the commission to put the public interest front and center when it considers issuing an “exclusion order.” Currently, the law only requires that the ITC consider the public interest as one of four separate factors. If H.R. 8037 were to pass, the ITC would have to affirmatively decide that each exclusion order is in the public interest.
While Congress may not have time to take this bill up before the election, we hope ITC reform is on the table in 2021. We supported a nearly identical bill in 2016, and since then, patent abusers have continued to be regulars at the ITC. It’s time to bid them farewell.
The antitrust lawsuit against Google filed by the Department of Justice (DOJ) and eleven state attorneys general has the potential to be the most important competition case against a technology company since the DOJ’s 1998 suit against Microsoft. The complaint is broad, covering Google’s power over search generally, along with search advertising. Instead of asking for money damages, the complaint asks for Google to be restructured and its illegal behavior restricted.
This suit flows from investigations by the DOJ Antitrust Division that have been going on since last year. Although a large, bipartisan group of state attorneys general were reportedly working together on the investigation, just eleven states, all with Republican attorneys general, joined the suit. A group of Democratic-led states are reportedly preparing a separate lawsuit.
The DOJ and states raised three claims in their suit, all under Section 2 of the Sherman Antitrust Act, which prohibits acquiring or maintaining monopoly power through improper means. The lawsuit alleges that Google illegally maintains monopoly power in three markets: “general search services, search advertising, and general search text advertising.” In these markets, says the complaint, “Google aggressively uses its monopoly positions, and the money that flows from them, to continuously foreclose rivals and protect its monopolies.”
Closely following the playbook of the US v. Microsoft antitrust suit filed in 1998, the suit against Google focuses on Google’s contracts with hardware vendors like Apple and Samsung, browser makers like Mozilla, Opera, and (again) Apple, and other technology companies whose products integrate with Google search or use Google’s Android operating system. These contracts, say the complaint, generally require technology companies to make Google the default search engine. In the case of devices that run Android, Google’s contracts also require vendors to include a bundle of Google apps (Gmail, Maps, YouTube). And vendors must agree not to modify Android in any significant way, even though the operating system is released under an open source license.
Implicit in the DOJ’s complaint is the notion that Google uses its control over multiple products and services (including search, email, video, and operating systems) to maintain its monopoly over search. Because of Google’s contracts, competitors in search like DuckDuckGo will struggle to offer an alternative to Google search unless they can also offer a mobile operating system, an email client, and so on.
Over the past several years, as the public and policymakers began looking to antitrust to rein in the biggest tech companies, there’s been plenty of concern about the difficulty of applying competition law to products that are free to the consumer. Over the years, U.S. regulators applying antitrust law have taken on a narrow focus on consumer prices, often finding that monopolistic behavior is OK as long as prices don’t rise. The DOJ’s complaint does a good job of getting around this hurdle while still focusing on harm to consumers. It alleges that as a monopolist in the “general search” market, Google “would be able to maintain quality below the level that would prevail in a competitive market.” Yes, consumers can get search for free, but they could get an even better free service if not for Google’s market power.
The complaint also employs a strategy that EFF has long advocated: treating consumer privacy as an aspect of product quality, such that overcollection and misuse of customers’ personal information by a monopolist can be evidence of harm to consumer welfare, the touchstone of modern antitrust law. “Google’s conduct,” says the complaint, “has harmed consumers by reducing the quality of general search services (including dimensions such as privacy, data protection, and use of consumer data), lessening choice in general search services, and impeding innovation.”
On the innovation angle, the complaint alleges that Google’s contracts deny potential search competitors the benefits of scale, which “affects a general search engine’s ability to deliver a quality search experience.” In other words, we would see more innovation in search engines if other entrepreneurs were able to compete with Google at scale.
Banking on the Power of Defaults
EFF and many other watchdog organizations have long argued that money damages don't do enough to change Big Tech practices. Any award of damages large enough to affect the future behavior of a company the size of Google (and its parent, Alphabet) would have to be far larger than even the largest antitrust damages ever awarded by a court. Otherwise, Google and its Big Tech brethren treat fines and damages as a necessary cost of doing business.
The DOJ’s suit takes a different approach. Its complaint asks for “structural relief,” which means a breakup or restructuring of the company, and court orders to stop the anticompetitive practices described in the complaint, but no money damages. If the government can prove that Google violated the antitrust laws, choosing to forego money damages should help the court focus on crafting effective structural and conduct remedies.
Many of the claims in the suit are built on the power of default settings: users are likely to stick with whatever search engine their devices are preconfigured to use, even if they can choose a different search engine. Google’s public messaging has challenged this principle for many years, arguing that “competition is only a click away.”
In its first public response to the DOJ lawsuit, Google argued that “our competitors are readily available too, if you want to use them.” But according to the complaint, research shows that people stick with the defaults, especially on mobile devices.
We’ve been here before. Microsoft raised a similar defense 20 years ago, arguing that its efforts to drive Windows users to its Internet Explorer browser were not illegal because users could install another browser at any time. The Court of Appeals for the DC Circuit treated Microsoft’s argument with skepticism, and its decision implied that default settings can be anticompetitive and form part of an antitrust violation.
Still, most antitrust cases are decided on the “rule of reason,” meaning a balancing of pro-competitive and anti-competitive effects. We can expect to see heated arguments in this case about just how easy it is to use alternative search engines and what that means for the ability of competitors like DuckDuckGo to thrive.
Three things are glaringly absent from the complaint. First, despite the efforts of some politicians to paint the suit as Google's comeuppance for allegedly censoring conservative viewpoints, the complaint says nothing about ideological bias or censorship. That’s not surprising to antitrust attorneys, because the claims of bias simply don’t help to make an antitrust case. We hope those comments about the lawsuit don’t end up fueling a defense by Google that the suit is an attempt at partisan score-settling. This sort of accusation clouded the DOJ’s ultimately unsuccessful effort to stop the merger of AT&T and Time Warner.
Second, the complaint doesn’t include any claims of monopoly abuse in web advertising markets or mobile operating systems. And it doesn’t specifically mention Google’s history of acquiring companies that could become serious competitors, including companies whose technology now powers Google’s search advertising. The government can still argue that Google’s acquisitions were part of its illegal monopoly maintenance, but the complaint suggests it’s not a major part of the case so far.
Third, the government did not bring any claims under Section 1 of the Sherman Act, the well-known provision that bans any “contract, combination . . . or conspiracy in restraint of trade.” Since the suit focuses on Google’s contracts with hardware makers and others, we would expect the government to argue that those contracts were themselves illegal restraints, as well as contributing to a campaign of illegal monopoly maintenance. The absence of Section 1 claims in this suit is odd.
Big Tech companies have been under scrutiny for years for being too big and too willing to wield market power in ways that are abusive or exacerbate existing problems. Unfair labor practices, outsized control over others' speech, and locking out new products by enforcing incompatibility on a technical level are a few that come to mind. If this lawsuit is the best opportunity in a generation to hold Big Tech accountable, it may seem disappointing that none of these issues are addressed. But it shouldn’t be. Today’s antitrust law focuses narrowly on consumer welfare, often permitting monopoly abuses as long as they don’t lead to higher consumer prices in the short term.
The DOJ’s suit is an attempt to show that Google’s practices have harmed consumer welfare. If lawsuits are going to address other harms, Congress needs to act to return antitrust laws to their historic focus on preventing and ending monopolies that harm workers, markets, competitors, and innovation. As the lawsuit continues, we’ll be pressing Congress to update our antitrust laws to address more of the harms of Big Tech monopolies.
The pledge period for the Combined Federal Campaign (CFC) is underway and EFF needs your help! Last year, U.S. government employees raised over $19,000 for EFF through the CFC, helping us fight for privacy, free speech, and security on the Internet so that we can help create a better digital future.
The Combined Federal Campaign is the world's largest and most successful annual charity campaign for U.S. federal employees and retirees. Since its inception in 1961, the CFC fundraiser has raised more than $8.4 billion for local, national, and international charities. This year's campaign runs from September 21 to January 15, 2021. Be sure to make your pledge before the campaign ends!
U.S. government employees can give to EFF by going to GiveCFC.org and clicking DONATE to give via payroll deduction, credit/debit, or an e-check! Be sure to use our CFC ID #10437. You can also scan the QR code below!
Even though EFF is celebrating its 30th anniversary, we're still fighting hard to protect online privacy, free expression, and innovation. We've made significant strides toward Internet freedom, including leading the call to reject the EARN IT Act, which threatens to break encryption and undermine free speech online; keeping track of how COVID-19 is affecting digital rights around the world; and mobilizing over 800 nonprofits and 64,000 individuals to stop the sale of the entire .ORG registry to a private equity firm.
Government employees have a tremendous impact on the shape of our democracy and the future of civil liberties and human rights online. Become an EFF member today by using our CFC ID #10437 when you make a pledge!
EFF Files Amicus Brief Arguing That Law Enforcement Access to Wi-Fi Derived Location Data Violates the Fourth Amendment
With increasing frequency, law enforcement is using unconstitutional digital dragnet searches to attempt to identify unknown suspects in criminal cases. In Commonwealth v. Dunkins, currently pending before the Pennsylvania Supreme Court, EFF and the ACLU are challenging a new type of dragnet: law enforcement’s use of WiFi data to retrospectively track individuals’ precise physical location.
Phones, computers, and tablets connect to WiFi networks—and in turn, the Internet—through a physical access point. Since a single access point can only service a limited number of devices within a certain range, WiFi networks that have many users and cover larger geographic areas have multiple stationary access points. When a device owner moves through a WiFi network with multiple access points, their device seamlessly switches to the nearest available point. This means that an access point can serve as a proxy for a device owner’s physical location. As an access point records a unique identifier for each device that connects to it, along with the time the device connected, access point logs can reveal a device’s precise location over time.
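To illustrate how little is required to turn connection logs into a location trail, here is a hypothetical sketch (the device identifiers, access point names, and timestamps below are all invented, not drawn from the court record). Joining an access point's log entries against the known physical location of each access point reconstructs a device owner's movements:

```python
from datetime import datetime

# Hypothetical map of access point names to their physical locations.
AP_LOCATIONS = {
    "ap-dorm-2f": "2nd-floor hallway",
    "ap-dorm-3f": "3rd-floor hallway",
    "ap-library": "library reading room",
}

# Hypothetical log entries: (device identifier, access point, connect time).
logs = [
    ("aa:bb:cc:11:22:33", "ap-library", "2017-10-02 01:10"),
    ("aa:bb:cc:11:22:33", "ap-dorm-2f", "2017-10-02 02:05"),
    ("dd:ee:ff:44:55:66", "ap-dorm-3f", "2017-10-02 02:10"),
]

def movement_history(device, entries):
    """Reconstruct a device's movements over time by joining its
    log entries against each access point's known location."""
    visits = [
        (datetime.strptime(ts, "%Y-%m-%d %H:%M"), AP_LOCATIONS[ap])
        for dev, ap, ts in entries
        if dev == device
    ]
    return [location for _, location in sorted(visits)]

print(movement_history("aa:bb:cc:11:22:33", logs))
# → ['library reading room', '2nd-floor hallway']
```

A simple join like this, run over logs the network already keeps, is all it takes to retrospectively track where a particular device, and therefore its owner, has been.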
In Dunkins, police were investigating a robbery that occurred in the middle of the night in a dorm at Moravian College in eastern Pennsylvania. To identify a suspect, police obtained logs of every device that connected to the 80-90 access points in the dorm—about one access point for every other dorm room—around the time of the robbery. From there, police identified devices belonging to several dozen students. They then narrowed their list to include only non-residents. That produced a list of three devices: two appeared to belong to women and one appeared to belong to a man who later turned out to be Dunkins. Since police believed the suspect was a man, they focused their investigation on that device. They then obtained records of Dunkins’ phone for five hours on the night of the robbery, showing each WiFi access point on campus that his phone connected to during that time. Dunkins was ultimately charged with the crime.
We argued in our brief that searches like this violate the Fourth Amendment. The WiFi log data can reveal sensitive location information, so it is essentially identical to the cell phone location records that the Supreme Court ruled in Carpenter require a warrant. Just like cell phone records, the WiFi logs offered the police the ability to retrospectively track a person’s movement, including inside constitutionally protected spaces like students’ dorm rooms. And just as the Carpenter court recognized that cell phones are essential for participation in modern life, accessing a college WiFi network is equally indispensable to college life.
Additionally, we argued that even if police had obtained a warrant, such a warrant would be invalid. The Fourth Amendment requires law enforcement to obtain a warrant based on probable cause before searching a particular target. But in this case, police only knew that a crime occurred—they did not have a suspect or even a target device identifier. Assessing virtually the same situation in the context of a geofence warrant, two federal judges recently ruled that the government’s application to obtain location records from a certain place during a specific time period failed to satisfy the Fourth Amendment’s particularity and probable cause requirements.
The police’s tactics in this case illustrate exactly why indiscriminate searches are a threat to a free society. In acquiring and analyzing the records from everyone in the dorm, the police not only violated the defendant’s rights but they also wrongly learned the location of every student who was in the dormitory in the middle of the night. In particular, police determined that two women wholly unconnected to the robbery were not in their own dorm rooms on the night of the crime. That’s exactly the type of dragnet surveillance that the Fourth Amendment defends against.
The outcome of this case could have far-reaching consequences. In Pennsylvania and across the nation, public WiFi networks are everywhere. And for poor people and people of color, free public WiFi is often a crucial lifeline. Those communities should not be at a greater risk of surveillance than people who have the means to set up their own private networks. We hope the court will realize what’s at stake here and rule that these types of warrantless searches are illegal.

Related Cases: Carpenter v. United States
AT&T and Verizon secured arguably one of the biggest regulatory benefits from the Federal Communications Commission (FCC) with the agency ending the last remnants of telecom competition law. In return for this massive gift from the federal government, they will give the public absolutely nothing.
A Little Bit of Telecom History
When the Department of Justice successfully broke up the AT&T monopoly into regional companies, it needed Congress to pass a law to open up those regional companies (known as Incumbent Local Exchange Carriers, or ILECs) to competition. To do that, Congress passed the Telecommunications Act of 1996, which established bedrock competition law and reaffirmed the non-discrimination policies that net neutrality is based on. The law created a new industry that would interoperate with the ILECs. These companies were called Competitive Local Exchange Carriers (CLECs), and many already existed locally at the time, selling early dial-up Internet access over AT&T telephone lines along with local phone services. As broadband came to market, CLECs used the ILECs’ copper wires to sell competitive DSL services. In the early years the policy worked: thousands of new companies sprang up, accompanied by massive competition-driven investment in the telecom sector in general (see chart below).
Source: Data assembled by the California Public Utilities Commission in 2005
Congress intervened to create the competitive market through the FCC, but at the same time gave the FCC the keys (through a process called “forbearance”) to eliminate those regulatory interventions should competition take root on its own. The FCC began applying forbearance just a few years after the passage of the ’96 Act, with arguably its most significant decision coming in 2005, when it ruled that the fiber wires being deployed by ILECs (such as Verizon’s new FiOS network) did not have to be shared with CLECs the way copper wires were. Many states followed course, though with notable resistance, because it was questionable whether market forces alone, without strong competition policy, could ensure that future networks met the goals of affordability and universality. One California Public Utilities Commissioner expressed concern that “within a few years there may not be any ZIP codes left in California with more than five or six providers.”
What Little Competition Remains Today
The ILECs today, essentially AT&T and Verizon, no longer deploy fiber broadband and have abandoned competing with cable companies in order to pursue wireless services. Without access to ILEC fiber, CLECs began to deploy their own fiber networks, financed by the revenues they earned from copper DSL customers, even in rural markets. But for the last two years AT&T and Verizon have been trying to put a stop to that by asking the FCC to use its forbearance power to eliminate copper sharing as well, which they achieved today. Worse yet, AT&T is already positioning itself to abandon its copper DSL lines rather than upgrade them to fiber in markets across the country, leaving people with a cable monopoly or undependable cell phone service for Internet access. In other words, today’s FCC decision will effectively make the digital divide worse for hundreds of thousands of Americans.
Broadband competition has been in a long, slow decline for well over a decade since the FCC’s 2005 decision. In the years that followed, the industry rapidly consolidated, with smaller companies being snuffed out or closing up shop. Virtually every prediction the FCC made about the market in 2005 has failed to pan out, and the end result is that a huge number of Americans now face regional monopolies just as their need for high-speed access has grown dramatically during the pandemic. It is time to rethink the approach, but today we got a double-down instead.
The FCC’s Decision is Not Final and a Future FCC Can Chart a New Course
The decline of competition didn’t happen exclusively at the hands of the current FCC, but the signs of regional monopolization were obvious at the start of 2017, when the agency decided to abandon its authority over the industry and repeal net neutrality rules. By approving today’s AT&T/Verizon petition to end the 1996 Act’s final remnants of competition policy, rather than improving and modernizing them to promote high-speed access competition, the FCC has decided that big ISPs know best. But we can see what will happen next. AT&T will work hard to eliminate any small competitors on its copper lines, because their presence impairs its ability to retire those lines, all while providing no fiber replacement itself, worsening the digital divide across the country. The right policy would make sure every American gets a fiber line rather than a disconnection. None of this has to happen, though, and EFF will work hard in the states and in D.C. to bring back competition in broadband access.
A legacy of the 2016 U.S. election is the controversy about the role played by paid, targeted political ads, particularly ads that contain disinformation or misinformation. Political scientists and psychologists disagree about how these ads work, and what effect they have. It's a pressing political question, especially on the eve of another U.S. presidential race, and the urgency only rises abroad, where acts of horrific genocide have been traced to targeted social media disinformation campaigns.
The same factors that make targeted political ads tempting to bad actors and dirty tricksters are behind much of the controversy. Ad-targeting, by its very nature, is opaque. The roadside billboard bearing a politician's controversial slogan can be pointed at and debated by all. Targeted ads can show different messages to different users, making it possible for politicians to "say the quiet part out loud" without their most extreme messaging automatically coming to light. Without being able to see the ads, we can't properly debate their effect.
Enter Ad Observatory, a project of the NYU Online Transparency Project, at the university's engineering school. Ad Observatory recruits Facebook users to shed light on political (and other) advertising by running a browser plugin that "scrapes" (makes a copy of) the ads they see when using Facebook. These ads are collected by the university and analyzed by the project's academic researchers; they also make these ads available for third party scrutiny. The project has been a keystone of many important studies and the work of accountability journalists.
With the election only days away, the work of the Ad Observatory is especially urgent. Facebook publishes its own “Ad Library,” but the NYU researchers explain that the company’s data set is “complicated to use, untold numbers of political ads are missing, and a significant element is lacking: how advertisers choose which specific demographics and groups of people should see their ad—and who shouldn’t.” The researchers have also cataloged many instances in which Facebook failed to live up to its promises to clearly label ads and fight disinformation.
But rather than embrace the Ad Observatory as a partner that rectifies the limitations of its own systems, Facebook has sent a legal threat to the university, demanding that the project shut down and delete the data it has already collected. Facebook’s position is that collecting data using "automated means" (including scraping) is a violation of its Terms of Service, even when the NYU Ad Observatory is acting on behalf of Facebook's own users, and even in furtherance of the urgent mission of fighting political disinformation during an election that U.S. politicians call the most consequential and contested in living memory.
Facebook’s threats are especially chilling because of its history of enforcing its terms of service using the Computer Fraud and Abuse Act (CFAA). The CFAA makes it a federal crime to access a computer connected to the Internet “without authorization," but it fails to define these terms. It was passed with the aim of outlawing computer break-ins, but some jurisdictions have converted it into a tool to enforce private companies’ computer use policies, like terms of service, which are typically wordy, one-sided contracts that virtually no one reads.
In fact, Facebook is largely responsible for creating terrible legal precedent on scraping and the CFAA in a 2016 Ninth Circuit Court of Appeals decision called Facebook v. Power Ventures. The case involved a dispute between Facebook and a social media aggregator, which Facebook users had voluntarily signed up for. Facebook did not want its users engaging with this service, so it sent Power Ventures a cease and desist letter alleging a violation of its terms of service and tried to block Power Ventures’ IP address. Even though the Ninth Circuit had previously decided that a violation of terms of service alone was not a CFAA violation, the court found that Power Ventures did violate the CFAA when it continued to provide its services after receiving the cease and desist letter. So the Power Ventures decision allows platforms to not only police their platforms against any terms of service violations they deem objectionable, but to turn even minor transgressions against a one-sided contract of adhesion into a violation of federal law that carries potentially serious civil and criminal liability.
More recently, the Ninth Circuit limited the scope of Power Ventures somewhat in HiQ v. LinkedIn. The court clarified that scraping public websites cannot be a CFAA violation regardless of personalized cease and desist letters sent to scrapers. However, that still leaves any material you have to log in to see—like most posts on Facebook—off limits to scraping if the platform decides it doesn’t like the scraper.
Decisions like Power Ventures potentially give Facebook a veto over a wide swath of beneficial outside research. That’s such a problem that some lawyers have argued interpreting the CFAA to criminalize terms of service violations would actually be unconstitutional. And at least one court has taken those concerns to heart, ruling in Sandvig v. Barr that the CFAA did not bar researchers who wanted to create multiple Facebook “tester" accounts to research how algorithms unlawfully discriminate based on characteristics like race or gender. The Sandvig decision should be a warning to Facebook that shutting down important civic research like the Ad Observatory is a serious misuse of the CFAA, which might even violate the First Amendment.
Over the weekend, Facebook executive Rob Leathern posted the official rationale for the multinational company's attack on a public university's researchers: he claimed that "Collecting personal data via scraping tools is an industry-wide problem that’s bad for people’s privacy & unsafe regardless of who is doing it. We protect people's privacy by not only prohibiting unauthorized scraping in our terms, we have teams dedicated to finding and preventing it. And under our agreement with the FTC, we report violations like these as privacy incidents...[W]e want to make sure that providing more transparency doesn't come at the cost of privacy."
Leathern is making a critical mistake here: he is conflating secrecy with privacy. Secrecy is when you (and possibly a few others) know something that everyone else does not get to know. Privacy is when you get to decide who knows what about you. As Facebook's excellent white paper on the subject explains: "What you share and who you share it with should be your decision."
Leathern's blanket condemnation of scraping is just as disturbing as his misunderstanding of privacy. Scraping is a critical piece of competitive compatibility, the process whereby new products and services are designed to work with existing ones without cooperation from the companies that made those services. Scraping is a powerful pro-competitive move that allows users and the companies that serve them to overturn the dominance of monopolists (that’s why it was key to forcing U.S. banks to adopt standards that let their customers manage their accounts in their own way). In the end, scraping is just an automated way of copying and pasting: the information extracted by the Ad Observer plugin is the same data that Mr. Leathern’s users could manually copy and paste into the Ad Observatory databases.
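To make the "automated copy and paste" point concrete, here is a minimal, self-contained sketch using Python's standard-library HTML parser. The page fragment and the `class="ad"` marker are invented for illustration; a real plugin like Ad Observer works against Facebook's actual markup, but the principle is the same: collect text the user's own browser is already displaying.

```python
from html.parser import HTMLParser

# A toy page fragment standing in for content a user already sees in
# their own browser. The "ad" class name is a made-up example.
PAGE = """
<html><body>
  <div class="ad">Candidate X will cut taxes!</div>
  <p>Regular post from a friend.</p>
  <div class="ad">Vote no on Measure Y.</div>
</body></html>
"""

class AdScraper(HTMLParser):
    """Collect the text of every <div class="ad"> element --
    the automated equivalent of copying and pasting each ad."""

    def __init__(self):
        super().__init__()
        self.in_ad = False
        self.ads = []

    def handle_starttag(self, tag, attrs):
        if tag == "div" and dict(attrs).get("class") == "ad":
            self.in_ad = True

    def handle_endtag(self, tag):
        if tag == "div":
            self.in_ad = False

    def handle_data(self, data):
        # Only keep non-whitespace text that appears inside an ad div.
        if self.in_ad and data.strip():
            self.ads.append(data.strip())

scraper = AdScraper()
scraper.feed(PAGE)
print(scraper.ads)
# → ['Candidate X will cut taxes!', 'Vote no on Measure Y.']
```

Nothing here reaches data the user couldn't see and transcribe by hand; the code merely saves the tedium of doing so for every ad.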
As with so many technological questions, the ethics of scraping depend on much more than what the drafter of any terms of service thinks is in its own best interest.
Facebook is very wrong here.
First, they are wrong on the law. The Computer Fraud and Abuse Act should not be on their side. Violating the company's terms of service to perform a constitutionally protected watchdog role is lawful.
Second, they are wrong on the ethics. There is no privacy benefit to users in prohibiting them from choosing to share the political ads they are served with researchers and journalists.
That's a lot of wrong. And worse, it's a lot of wrong on the eve of an historic election, a wrongness that will chill other projects contemplating their own accountability investigations into dominant tech platforms.
Copyright law is supposed to promote creativity, not stamp out criticism. Too often, copyright owners forget that – especially when they have a convenient takedown tool like the Digital Millennium Copyright Act (DMCA).
EFF is happy to remind them – as we did this month on behalf of Internet creator Lindsay Ellis. Ellis had posted a video about a copyright dispute between authors in a very particular fandom niche: the Omegaverse realm of wolf-kink erotica. The video tells the story of that dispute in gory and hilarious detail, while breaking down the legal issues and proceedings along the way. Techdirt called it “truly amazing.” We agree. But feel free to watch “Into the Omegaverse: How a Fanfic Trope Landed in Federal Court,” and decide for yourself.
The dispute described in the video began with a series of takedown notices to online platforms with highly dubious allegations of copyright infringement. According to these notices, one Omegaverse author, Zoey Ellis (no relation), had infringed the copyright of another, Addison Cain, by copying common thematic aspects of characters in the Omegaverse genre, i.e., tropes. As Ellis’ video explains, these themes not only predate Cain’s works, but are uncopyrightable as a matter of law. Further litigation ensued, and Ellis’ video explains what happened and the opinions she formed based on the publicly available records of those proceedings. Some of those opinions are scathingly critical of Ms. Cain. But the First Amendment protects scathing criticism. So does copyright law: criticism and parody are classic examples of fair use that are authorized by law. Still, as we have written many times, DMCA abuse targeting such fair uses remains a pervasive and persistent problem.
Nevertheless, it didn’t take long for Cain to send (through counsel) outlandish allegations of copyright infringement and defamation. Soon after, Patreon and YouTube received DMCA notices from email addresses associated with Cain that raised the same allegations.
That’s when EFF stepped in. The video is a classic fair use. It uses a relatively small amount of a copyrighted work for purposes of criticism and parody in an hour-long video that consists overwhelmingly of Ellis’ original content. In short, the copyright claims were deficient as a matter of law.
The defamation claims were also deficient, but their presence alone was cause for concern: defamation claims have no place in a DMCA notice. If you want defamatory content taken down, you should seek a court order and satisfy the First Amendment’s rigorous requirements. Platform providers should be extremely skeptical of DMCA notices that include such claims and examine them carefully.
We explained these points in a letter to Cain’s counsel. We hoped the reminder would be well-taken. We were wrong. In response, Cain’s counsel accused EFF of colluding with the Organization for Transformative Works to undermine her client and demanded apologies from both EFF and Ellis.
As we explain in today’s response, that’s not going to happen. EFF has fought for years to protect the rights of content creators like Lindsay Ellis and we will not apologize for our commitment to this work. Nor will Ellis apologize for exercising her right to speak critically about public figures and their work. It’s past time to put an end to this entire matter.
With the upcoming U.S. elections, major U.S.-based platforms have stepped up their content moderation practices, likely hoping to avoid the blame heaped upon them after the 2016 election, when many held them responsible for siloing users into ideological bubbles—and, in Facebook’s case, the Cambridge Analytica imbroglio. It’s not clear that social media played a more significant role than many other factors, including traditional media. But the techlash is real enough.
So we can’t blame them for trying, nor can we blame users for asking them to. Online disinformation is a problem that has had real consequences in the U.S. and all over the world—it has been correlated to ethnic violence in Myanmar and India and to Kenya’s 2017 elections, among other events.
But it is equally true that content moderation is a fundamentally broken system. It is inconsistent and confusing, and as layer upon layer of policy is added to a system that employs both human moderators and automated technologies, it is increasingly error-prone. Even well-meaning efforts to control misinformation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression.
We have been watching closely as Facebook, YouTube, and Twitter, while disclaiming any interest in being “the arbiters of truth,” have all adjusted their policies over the past several months to try to arbitrate lies—or at least flag them. And we’re worried, especially when we look abroad. Already this year, an attempt by Facebook to counter election misinformation targeting Tunisia, Togo, Côte d’Ivoire, and seven other African countries resulted in the accidental removal of accounts belonging to dozens of Tunisian journalists and activists, some of whom had used the platform during the country’s 2011 revolution. While some of those users’ accounts were restored, others—mostly belonging to artists—were not.
Back in the U.S., Twitter recently blocked a New York Post article about presidential candidate Joe Biden’s son on the grounds that it was based on hacked materials, and then lifted the block two days later. After placing limits on political advertising in early September, Facebook promised not to change its policies further ahead of the elections. Three weeks later it announced changes to its political advertising policies and then blocked a range of expression it previously permitted. In both cases, users—especially users who care about their ability to share and access political information—are left to wonder what might be blocked next.
Given the ever-changing moderation landscape, it’s hard to keep up. But there are some questions users and platforms can ask about every new iteration, whether or not an election is looming. Not coincidentally, many of these overlap with the Santa Clara Principles on Transparency and Accountability in Content Moderation, a set of practices created by EFF and a small group of organizations and advocates, that social media platforms should undertake to provide transparency about why and how often they take down users’ posts, photos, videos and other content.
Is the Approach Narrowly Tailored or a Categorical Ban?
Outright censorship should not be the only answer to disinformation online. When tech companies ban an entire category of content, they have a history of overcorrecting and censoring accurate, useful speech—or, even worse, reinforcing misinformation. Any restrictions on speech should be both necessary and proportionate.
Moreover, online platforms have other ways to address the rapid spread of disinformation. For example, flagging or fact-checking content that may be of concern carries its own problems–again, it means someone—or some machine—has decided what does and does not require further review, and who is and is not an accurate fact-checker. Nonetheless, this approach has the benefit of leaving speech available for those who wish to receive it.
When a company does adopt a categorical ban, we should ask: Can the company explain what makes that category exceptional? Are the rules to define its boundaries clear and predictable, and are they backed up by consistent data? Under what conditions will other speech that challenges established consensus be removed? Who decides what does or does not qualify as “misleading” or “inaccurate”? Who is tasked with testing and validating the potential bias of those decisions?
Does It Empower Users?
Platforms must address one of the root causes behind disinformation’s spread online: the algorithms that decide what content users see and when. And they should start by empowering users with more individualized tools that let them understand and control the information they see.
Algorithms used by Facebook’s News Feed or Twitter’s timeline make decisions about which news items, ads, and user-generated content to promote and which to hide. That kind of curation can play an amplifying role for some types of incendiary content, despite the efforts of platforms like Facebook to tweak their algorithms to “disincentivize” or “downrank” it. Features designed to help people find content they’ll like can too easily funnel them into a rabbit hole of disinformation.
Users shouldn’t be held hostage to a platform’s proprietary algorithm. Instead of serving everyone “one algorithm to rule them all” and giving users just a few opportunities to tweak it, platforms should open up their APIs to allow users to create their own filtering rules for their own algorithms. News outlets, educational institutions, community groups, and individuals should all be able to create their own feeds, allowing users to choose who they trust to curate their information and share their preferences with their communities.
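To make the idea concrete, here is a minimal, purely hypothetical sketch of what a user-authored filtering rule might look like if platforms exposed their feeds through an open API. The `Post` structure, the rule function, and the field names are all illustrative inventions, not any real platform's API:

```python
# Hypothetical sketch: a user-supplied filtering rule applied to a feed.
# Post, my_feed_rule, and all field names are illustrative inventions,
# not any real platform's API.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    engagement: int  # the signal an engagement-driven algorithm would rank by

def my_feed_rule(posts, trusted_authors):
    """Surface posts from sources the user trusts first, demoting
    engagement-ranked content instead of letting it dominate the feed."""
    trusted = [p for p in posts if p.author in trusted_authors]
    rest = sorted((p for p in posts if p.author not in trusted_authors),
                  key=lambda p: p.engagement, reverse=True)
    return trusted + rest

# A user (or a community group they trust) chooses the rule and its inputs.
posts = [Post("tabloid", "outrage!", 9000),
         Post("local_news", "city council vote today", 40)]
feed = my_feed_rule(posts, trusted_authors={"local_news"})
```

The point of the sketch is the locus of control: the ranking logic lives with the user (or a curator they choose), not in a single proprietary algorithm served to everyone.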
In addition, platforms should examine the parts of their infrastructure that are acting as a megaphone for dangerous content and address that root cause of the problem rather than censoring users.
During an election season, the mistaken deletion of accurate information and commentary can have outsize consequences. Absent exigent circumstances, companies must notify the user and give them an opportunity to appeal before the content is taken down. If they choose to appeal, the content should stay up until the question is resolved. Smaller platforms dedicated to serving specific communities may want to take a more aggressive approach. That’s fine, as long as Internet users have a range of meaningful options with which to engage.
Is It Transparent?
The most important parts of the puzzle here are transparency and openness. Transparency about how a platform’s algorithms work, and tools to allow users to open up and create their own feeds, are critical for wider understanding of algorithmic curation, the kind of content it can incentivize, and the consequences it can have.
In other words, actual transparency should allow outsiders to see and understand what actions are performed, and why. Meaningful transparency inherently implies openness and accountability, and cannot be satisfied by simply counting takedowns. That is to say, there is a difference between corporately sanctioned ‘transparency,’ which is inherently limited, and meaningful transparency that empowers users to understand a platform’s actions and hold the company accountable.
Is the Policy Consistent With Human Rights Principles?
Companies should align their policies with human rights norms. In a paper published last year, David Kaye—the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression—recommends that companies adopt policies that allow users to “develop opinions, express themselves freely and access information of all kinds in a manner consistent with human rights law.” We agree, and we’re joined in that opinion by a growing international coalition of civil liberties and human rights organizations.
Content Moderation Is No Silver Bullet
We shouldn’t look to content moderators to fix problems that properly lie with flaws in the electoral system. You can’t tech your way out of problems the tech didn’t create. And even where content moderation has a role to play, history tells us to be wary. Content moderation at scale is impossible to do perfectly, and nearly impossible to do well, even under the most transparent, sensible, and fair conditions – which is one of many reasons none of these policy choices should be legal requirements. It inevitably involves difficult line-drawing and will be riddled with both mistakes and a ton of decisions that many users will disagree with. However, there are clear opportunities to make improvements and it is far past time for platforms to put these into practice.
One bad privacy idea that won’t die is the so-called “data dividend,” which imagines a world where companies have to pay you in order to use your data.
Sound too good to be true? It is.
Let’s be clear: getting paid for your data—probably no more than a handful of dollars at most—isn’t going to fix what’s wrong with privacy today. Yes, a data dividend may sound at first blush like a way to get some extra money and stick it to tech companies. But that line of thinking is misguided, and falls apart quickly when applied to the reality of privacy today. In truth, the data dividend scheme hurts consumers, benefits companies, and frames privacy as a commodity rather than a right.
EFF strongly opposes data dividends and policies that lay the groundwork for people to think of the monetary value of their data rather than view it as a fundamental right. You wouldn’t place a price tag on your freedom to speak. We shouldn’t place one on our privacy, either.
Think You’re Sticking It to Big Tech? Think Again
Supporters of data dividends correctly recognize one thing: when it comes to privacy in the United States, the companies that collect information currently hold far more power than the individual consumers continually tapped for that information.
But data dividends do not meaningfully correct that imbalance. Here are three questions to help consider the likely outcomes of a data dividend policy:
- Who will determine how much you get paid to trade away your privacy?
- What makes your data valuable to companies?
- What does the average person gain from a data dividend, and what do they lose?
Data dividend plans are thin on details regarding who will set the value of data. Logically, however, companies have the most information about the value they can extract from our data. They also have a vested interest in using the lowest possible bar to set that value. Legislation in Oregon to value health data would have allowed companies to set that value, leaving little chance that consumers would get anywhere near a fair shake. Even if a third party, such as a government panel, were tasked with setting a value, the companies would still be the primary sources of information about how they plan to monetize data.
Privacy should not be a luxury. It should not be a bargaining chip. It should never have a price tag.
Which brings us to a second question: why and in what ways do companies value data? Data is the lifeblood of many industries. Some of that data is organized by consumer and then used to deliver targeted ads. But it’s also highly valuable to companies in the aggregate—not necessarily on an individual basis. That’s one reason why data collection can often be so voracious. A principal point of collecting data is to identify trends—to sell ads, to predict behavior, etc.— and it’s hard to do that without getting a lot of information. Thus, any valuation that focuses solely on individualized data, to the exclusion of aggregate data, will be woefully inadequate. This is another reason why individuals aren’t well-positioned to advocate for good prices for themselves.
Even for companies that make a lot of money, the average revenue per user may be quite small. For example, Facebook earned some $69 billion in revenue in 2019. For the year, it averaged about $7 revenue per user, globally, per quarter. Let’s say that again: Facebook is a massive, global company with billions of users, but each user only offers Facebook a modest amount in revenue. Profit per user will be much smaller, so there is no possibility that legislation will require companies to make payouts on a revenue-per-customer basis. As a result, the likely outcome of a data dividend law (even as applied to an extremely profitable company like Facebook) would be that each user receives, in exchange for their personal information over the course of an entire year, a very small piece of the pie—perhaps just a few dollars.
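The per-user figures above can be checked with back-of-envelope arithmetic. The $69 billion revenue figure comes from the text; the roughly 2.5 billion user count is our assumption for the sake of the estimate:

```python
# Back-of-envelope check of the per-user revenue figures cited above.
# Annual revenue is from the text; the ~2.5 billion user count is an assumption.
annual_revenue = 69e9   # Facebook's 2019 revenue, USD
users = 2.5e9           # assumed global user base

revenue_per_user_year = annual_revenue / users        # about $27.60 per year
revenue_per_user_quarter = revenue_per_user_year / 4  # about $6.90, i.e. roughly $7
```

Even before subtracting costs to get profit, the per-user quarterly figure lands in the single digits of dollars, which is why any plausible dividend check would be small.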
Those small checks in exchange for intimate details about you are not a fairer trade than we have now. The companies would still have nearly unlimited power to do what they want with your data. That would be a bargain for the companies, who could then wipe their hands of concerns about privacy. But it would leave users in the lurch.
All that adds up to a stark conclusion: if where we’ve been is any indication of where we’re going, there won’t be much benefit from a data dividend. What we really need is stronger privacy laws to protect how businesses process our data—which we can, and should, do as a separate and more protective measure.
Whatever the Payout, The Cost Is Too High
And what do we lose by agreeing to a data dividend? We stand to lose a lot. Data dividends will likely be most attractive to those for whom even a small bit of extra money would do a lot. Those vulnerable people—low-income Americans and often communities of color—should not be incentivized to pour more data into a system that already exploits them and uses data to discriminate against them. Privacy is a human right, not a commodity. A system of data dividends would contribute to a society of privacy “haves” and “have-nots.”
Also, as we’ve said before, a specific piece of information can be priceless to a particular person and yet command a very low market price. Public information feeds a lot of the data ecosystem. But even non-public data, such as your location data, may cost a company less than a penny to buy—and cost you your physical safety if it falls into the wrong hands. Likewise, companies currently sell lists of 1,000 people with conditions such as anorexia, depression, and erectile dysfunction for $79 per list—or eight cents per listed person. Such information in the wrong hands could cause great harm.
There is no simple way to set a value for data. If someone asked how much they should pay you to identify where you went to high school, you’d probably give that up for free. But if a mortgage company uses that same data to infer that you’re in a population that is less likely to repay a mortgage—as a Berkeley study found was true for Black and Latinx applicants—it could cost you the chance to buy a home.
Those who follow our work know that EFF also opposes “pay-for-privacy” schemes, referring to offers from a company to give you a discount on a good or service in exchange for letting them collect your information.
In a recent example of this, AT&T said it will introduce mobile plans that knock between $5 and $10 off people’s phone bills if they agree to watch more targeted ads on their phone. "I believe there's a segment of our customer base where, given a choice, they would take some load of advertising for a $5 or $10 reduction in their mobile bill," AT&T Chief Executive Officer John Stankey said to Reuters in September.
Again, there are people for whom $5 or $10 per month would go a long way to make ends meet. That also means, functionally, that similar plans would prey on those who can’t afford to protect themselves. We should be enacting privacy policies that protect everyone, not exploitative schemes that treat lower-income people as second-class citizens.
Pay-for-privacy and data dividends are two sides of the same coin. Some data dividend proponents, such as former presidential candidate Andrew Yang, draw a direct line between the two. Once you recognize that data have some set monetary value, as schemes such as AT&T’s do, it paves the way for data dividends. EFF opposes both of these ideas, as both would lead to an exchange of data that would endanger people and commodify privacy.
Advocacy of a data dividend—or pay-for-privacy— as the solution to our privacy woes admits defeat. It yields to the incorrect notion that privacy is dead, and is worth no more than a coin flipped your way by someone who holds all the cards.
It undermines privacy to encourage people to accept the scraps of an exploitative system. This further lines the pockets of those who already exploit our data, and exacerbates unfair treatment of people who can’t afford to pay for their basic rights.
There is no reason to concede defeat to these schemes. Privacy is not dead—theoretically or practically, despite what people who profit from abusing your privacy want you to think. As Dipayan Ghosh has said, privacy nihilists ignore a key part of the data economy, “[your] behavioral data are temporally sensitive.” Much of your information has an expiration date, and companies that rely on it will always want to come back to the well for more of it. As the source, consumers should have more of the control.
That’s why we need to change the system and redress the imbalance. Consumers should have real control over their information, and ways to stand up and advocate for themselves. EFF’s top priorities for privacy laws include granting every person the right to sue companies for violating their privacy, and prohibiting discrimination against those who exercise their rights.
It’s also why we advocate strongly for laws that make privacy the default—requiring companies to get your opt-in consent before using your information, and to minimize how they process your data to what they need to serve your needs. That places meaningful power with the consumer — and gives you the choice to say “no.” Allowing a company to pay you for your data may sound appealing in theory. In practice, unlike in meaningful privacy regimes, it would strip you of choice, hand all your data to the companies, and give you pennies in return.
Data dividends run down the wrong path to exercising control, and would dig us deeper into a system that reduces our privacy to just another cost of doing business. Privacy should not be a luxury. It should not be a bargaining chip. It should never have a price tag.
At the end of September, multiple press outlets published a leaked set of antimonopoly enforcement proposals for a new EU Digital Markets Act, which EU officials say they will finalize this year.
The proposals confront the stark fact that the Internet has been thoroughly dominated by a handful of giant, U.S.-based firms, which compete on a global stage with a few giant Chinese counterparts and a handful of companies from Russia and elsewhere. The early promise of a vibrant, dynamic Internet, where giants were routinely toppled by upstarts helmed by outsiders seems to have died, strangled by a monopolistic moment in which the Internet has decayed into "a group of five websites, each consisting of screenshots of text from the other four."
Anti-Monopoly Laws Have Been Under-Enforced
The tech sector is not exceptional in this regard: from professional wrestling to eyeglasses to movies to beer to beef and poultry, global markets have collapsed into oligopolies, with each sector dominated by a handful of companies (or just one).
Fatalistic explanations for the unchecked rise of today's monopolized markets—things like network effects and first-mover advantage—are not the whole story. If these factors completely accounted for tech's concentration, then how do we explain wrestling's concentration? Does professional wrestling enjoy network effects too?
A simpler, more parsimonious explanation for the rise of monopolies across the whole economy can be found in the enforcement of anti-monopoly law, or rather, the lack thereof, especially in the U.S. For about forty years, the U.S. and many other governments have embraced a Reagan-era theory of anti-monopoly called "the consumer welfare standard." This ideology, associated with Chicago School economic theorists, counsels governments to permit monopolistic behavior – mergers between large companies, "predatory acquisitions" of small companies that could pose future threats, and the creation of vertically integrated companies that control large parts of their supply chain – so long as there is no proof that this will lead to price-rises in the immediate aftermath of these actions.
For four decades, successive U.S. administrations from both parties, and many of their liberal and conservative counterparts around the world, have embraced this ideology and have sat by as firms have grown not by selling more products than their competitors, or by making better products than their competitors, but rather by ceasing to compete altogether by merging with one another to create a "kill zone" of products and services that no one can compete with.
After generations in ascendancy, the consumer welfare doctrine is finally facing a serious challenge, and not a moment too soon. In the U.S., both houses of Congress held sweeping hearings on tech companies' anticompetitive conduct, and the House's bold report on its lengthy, deep investigation into tech monopolism signaled a political establishment ready to go beyond consumer welfare and return to a more muscular, pre-Reagan form of competition enforcement anchored in the idea that monopolies are bad for society, and that we should prevent them because they hurt workers and consumers, and because they distort politics and smother innovation -- and not merely because they sometimes make prices go up.
A New Set of Anti-Monopoly Tools for the European Union
These new EU leaks are part of this trend, and in them, we find a made-in-Europe suite of antimonopoly enforcement proposals that are, by and large, very welcome indeed. The EU defines a new, highly regulated sub-industry within tech called a "gatekeeper platform" -- a platform that exercises "market power" within its niche (the precise definition of this term is hotly contested). For these gatekeepers, the EU proposes a long list of prohibitions:
- A ban on platforms' use of customer transaction data unless that data is also made available to the companies on the platform (so Amazon would have to share the bookselling data it uses in its own publishing efforts with the publishers that sell through its platform, or stop using that data altogether)
- Platforms will have to obtain users' consent before combining data about their use of the platform with other data from third parties
- A ban on "preferential ranking" of platforms' own offerings in their search results: if you search for an address, Google will have to show you the best map preview for that address, even if that's not Google Maps
- Platforms like iOS and Android can't just pre-load their devices exclusively with their own apps, nor could Google require Android manufacturers to preinstall Google's preferred apps, and not other apps, on Android devices
- A ban on devices that use "technical measures" (that's what lawyers call DRM -- any technology that stops you from doing what you want with your stuff) to prevent you from removing pre-installed apps.
- A ban on contracts that force businesses to offer their wares everywhere on the same terms as the platform demands -- for example, if platforms require monthly subscriptions, a business could offer the same product for a one-time payment somewhere else.
- A ban on contracts that punish businesses on platforms from telling their customers about ways to use their products without using the platform (so a mobile game could inform you that you can buy cheaper power-ups if you use the company's website instead of the app)
- A ban on systems that don't let you install unapproved apps (AKA "side-loading")
- A ban on gag-clauses in contracts that prohibit companies from complaining about the way the platform runs its business
- A ban on requiring that you use a specific email provider to use a platform (think of the way that Android requires a Gmail address)
- A requirement that users be able to opt out of signing into services operated by the platform they're using -- so you could sign into YouTube without being signed into Gmail
On top of those rules, there's a bunch of "compliance" systems to make sure they're not being broken:
- Ad platforms will have to submit to annual audits that will help advertisers understand who saw their ads and in what context
- Ad platforms will have to submit to annual audits disclosing their "cross-service tracking" of users and explaining how this complies with the GDPR, the EU’s privacy rules
- Gatekeepers will have to produce documents on demand from regulators to demonstrate their compliance with rules
- Gatekeepers will have to notify regulators of any planned mergers, acquisitions or partnerships
- Gatekeepers will have to pay employees to act as compliance officers, watchdogging their internal operations
In addition to all this, the leak reveals a "greylist" of activities that regulators will intervene to stop:
- Any action that prevents sellers on a platform from acquiring "essential information" that the platform collects on their customers
- Collecting more data than is needed to operate a platform
- Preventing sellers on a platform from using the data that the platform collects on their customers
- Anything that creates barriers preventing businesses on a platform or their customers from migrating to a rival's platform
- Keeping an ad platform's click and search data secret -- platforms will have to sell this data on a "fair, reasonable and non-discriminatory" basis
- Any steps that stop users from accessing a rival's products or services on a platform
- App store policies that ban third-party sellers from replicating an operating system vendor's own apps
- Locking users into a platform's own identity service
- Platforms that degrade quality of service for competitors using the platform
- Locking platform sellers into using the platform's payment-processor, delivery service or insurance
- Platforms that offer discounts on their services to some businesses but not others
- Platforms that block interoperability for delivery, payment and analytics
- Platforms that degrade connections to rivals' services
- Platforms that "mislead" users into switching from a third-party's services to the platform's own
- Platforms that practice "tying" – forcing users to access unrelated third-party apps or services (think of an operating system vendor that requires you to get a subscription to a partner's antivirus tools).
One worrying omission from this list: interoperability rules for dominant companies. The walled gardens with which dominant platforms imprison their users are a serious barrier to new competitors. Forcing them to install gateways – ways for users of new services to communicate with the friends and services they left behind when they switched – will go a long way toward reducing the power of the dominant companies. That is a more durable remedy than passing rules to force those dominant actors to use their power wisely.
That said, there's plenty to like about these proposals, but the devil is in the details.
In particular, we're concerned that all the rules in the world do no good if they are not enforced. Whether a company has "degraded service" to a rival is hard to determine from the outside -- can we be certain that service problems are a deliberate act of sabotage? What about companies' claims that these are just normal technical issues arising from providing service to a third party whose servers and network connections are out of its control?
Harder still is telling whether a search result unduly preferences a platform's products over rivals': the platforms will say (they do say) that they link to their own services ahead of others because they rank their results by quality and their weather reports, stores, maps, or videos are simply better than everyone else's. Creating an objective metric of the "right" way to present search results is certain to be contentious, even among people of goodwill who agree that the platform's own services aren't best.
What to do then? Well, as economists like to say, "incentives matter." Companies preference their own offerings in search, retail, pre-loading, and tying because they have those offerings. A platform that competes with its customers has an incentive to cheat on any rules of conduct in order to preference its products over the competing products offered by third parties.
Traditional antimonopoly law recognized this obvious economic truth, and responded to it with a policy called "structural separation": this was an industry-by-industry ban on certain kinds of vertical integration. For example, rail companies were banned from operating freight companies that competed with the freighters who used the rails; banks were banned from owning businesses that competed with the businesses they loaned money to. The theory of structural separation is that in some cases, dominant companies simply can't be trusted not to cheat on behalf of their subsidiaries, and catching them cheating is really hard, so we just remove the temptation by banning them from operating subsidiaries that benefit from cheating.
A structural separation regime for tech -- say, one that prevented store-owners from competing with the businesses that sold things in their store, or one that prevented search companies from running ad-companies that would incentivize them to distort their search results -- would take the pressure off of many of the EU's most urgent (and hardest-to-enforce) rules. Not only would companies who broke those rules fail to profit by doing so, but detecting their cheating would be a lot easier.
Imposing structural separation is not an easy task. Given the degree of vertical integration in the tech sector today, structural separation would mean unwinding hundreds of mergers, spinning off independent companies, or requiring independent management and control of subsidiaries. The companies will fight this tooth-and-nail.
But despite this, there is political will for separation. The Dutch and French governments have both signaled their displeasure with the leaked proposal, insisting that it doesn't go far enough, signing a (non-public) position paper that calls for structural separation, with breakups "on the table."
Whatever happens with these proposals, the direction of travel is clear. Monopolies are once again being recognized as a problem in and of themselves, regardless of their impact on short-term prices. It's a welcome, long overdue change.
After the world went into lockdown for COVID-19, Makers were suddenly confined to their workshops. Rather than idly wait it out, many of them decided to put their tools and skills to use, developing low-cost, rapid production methods for much-needed PPE and DIY ventilators in an effort to address the worldwide shortage.
It might sound outlandish to think that hobbyists and weekend warriors would be able to design and build devices that contribute to bending the curve of the pandemic, but there’s a rich history of similar work. The “iron lung,” the first modern negative-pressure ventilator, began as a side project of Harvard engineer Philip Drinker. It was powered by an electric motor and air pumps from vacuum cleaners. By 1928, Drinker and Louis Shaw had finished their designs, and production of the “Drinker respirator” began, saving lives during the polio epidemic.
In the 1930s, John Emerson, a high-school drop-out and self-taught inventor, improved on the design of Drinker’s iron lung, releasing a model that was quieter, lighter, more efficient, and half the price of the Drinker respirator. Drinker and Harvard eventually sued Emerson, claiming patent infringement. After defending against these claims, Emerson went on to become a key manufacturer of these life-saving devices, a development that was applauded by healthcare providers of the time.
It’s not enough to just have the tools and know-how; you also need insight and context. Emerson’s machine shop was located in Harvard Square, where he built research devices for the local Boston medical schools. Without a doubt, that high-bandwidth access to researchers and users of his devices assisted in his innovations. In order to develop or improve existing technology, the modern Maker community needs access to the same sort of information.
Open access to research is critical to the process of developing new things. Making the methods and results of research freely available to all preserves the ability to fruitfully investigate and improve upon existing methods and devices. The first step in fixing or improving a system is understanding how it works, and what the mechanisms at play are. Impediments like paywalls or subscriptions decrease the likelihood that research is shared, and result in a severe handicap to the innovation process. In his book Democratizing Innovation, MIT professor Eric von Hippel makes the case that if “innovations are not diffused, multiple users with very similar needs will have to invest to (re)develop very similar innovations, which would be a poor use of resources from the social welfare point of view.” If the purpose of academic research is to push the boundaries of human knowledge, there is no justifiable case to be made for restricting access to that knowledge.
From its earliest days on the Internet, the Maker community has embraced the culture of information sharing. Through project documentation, YouTube videos, and free 3D printer STL files, Makers share their methods and innovations freely and openly, enriching the community with each new project. As a result, millions of people have been able to learn new skills, develop new products, and become contributors to the open body of knowledge freely available online.
In 2010, I became frustrated that there was so much information locked in the “ivory tower” of academic research journals, and I wanted to do my part in liberating some of it. After reading a research paper from a UIUC materials science lab (which my university library thankfully had access to), I set out to replicate the results at our local hackerspace. After parsing the jargon, I deciphered their methods and was able to successfully make the conductive ink described in the paper. In keeping with the Maker ethos of sharing, I wrote a blog post describing how I did it using low-tech tools.
In 2020, there are now many Makers who are doing the same thing. YouTubers like Applied Science, The Thought Emporium, NileRed, and Breaking Taps routinely post videos on methods gleaned from research papers, sharing how they replicated the results, even filling in gaps in the papers with their own experiments, methods, successes, and failures.
These hobbyists aren’t just sharing information; they’re also providing a much-needed service to the academic community: replication. With countless academic papers being published every year, there’s a growing “replication crisis,” where many of the studies published have been impossible to reproduce. In 2016, a poll of 1,500 scientists revealed that 70% had failed to reproduce at least one other scientist's experiment, and 50% had even failed to reproduce one of their own experiments. Opening access to research allows Makers to participate in the process, addressing the need for replication.
This democratic, open access approach to the development, discovery, and distribution of research enables academics and non-academics alike to test, replicate, improve, and submit their findings in a highly transparent way. It allows all to participate in broadening the horizons of scientific research, regardless of whether they are enrolled at (or employed by) a university. Open access allows anyone who learns or discovers something new to share that information, even if they haven’t spent thousands of dollars and years of their life earning credentials.
Whether it’s for medical device innovation, materials science methods, or any other body of human knowledge, it’s time for open access research to be the default. The promise of the Internet is free and open access to information for and from all, and information gleaned from research should be no different. Making researchers (or Makers) pay to both publish and access research is an antiquated system that has no place on the modern Internet. It serves only to profit publishers and actively hinders innovation and critical research replication. It’s time for the academic community to shed this vestigial appendage and embrace the open access ethos that Makers have engendered online.
EFF is proud to celebrate Open Access Week.
EFF Files Comment Opposing the Department of Homeland Security's Massive Expansion of Biometric Surveillance
EFF, joined by several leading civil liberties and immigrant rights organizations, recently filed a comment calling on the Department of Homeland Security (DHS) to withdraw a proposed rule that would exponentially expand biometrics collection from both U.S. citizens and noncitizens who apply for immigration benefits and would allow DHS to mandate the collection of face data, iris scans, palm prints, voice prints, and DNA. DHS received more than 5,000 comments in response to the proposed rule, and five U.S. Senators also demanded that DHS abandon the proposal.
DHS’s biometrics database is already the second largest in the world. It contains biometrics from more than 260 million people. If DHS’s proposed rule takes effect, DHS estimates that it would nearly double the number of people added to that database each year, to over 6 million people. And, equally important, the rule would expand both the types of biometrics DHS collects and how DHS uses them.
What the Rule Would Do
Currently, DHS requires applicants for certain, but not all, immigration benefits to submit fingerprints, photographs, or signatures. DHS’s proposed rule would change that regime in three significant ways.
First, the proposed rule would make mandatory biometrics submission the default for anyone who submits an application for an immigration benefit. In addition to adding millions of non-citizens, this change would sweep in hundreds of thousands of U.S. citizens and lawful permanent residents who file applications on behalf of family members each year. DHS also proposes to lift its restrictions on the collection of biometrics from children to allow the agency to mandate collection from children under the age of 14.
Second, the proposed rule would expand the types of biometrics DHS can collect from applicants. The rule would explicitly give DHS the authority to collect palm prints, photographs “including facial images specifically for facial recognition, as well as photographs of physical or anatomical features such as scars, skin marks, and tattoos,” voice prints, iris images, and DNA. In addition, by proposing a new and expansive definition of the term “biometrics,” DHS is laying the groundwork to collect behavioral biometrics, which can identify a person through the analysis of their movements, such as their gait or the way they type.
Third, the proposed rule would expand how DHS uses biometrics. The proposal states that a core goal of DHS’s expansion of biometrics collection would be to implement “enhanced and continuous vetting,” which would require immigrants “be subjected to continued and subsequent evaluation to ensure they continue to present no risk of causing harm subsequent to their entry.” This type of enhanced vetting was originally contemplated in Executive Order 13780, which also banned nationals of Iran, Libya, Somalia, Sudan, Syria, and Yemen from entering the United States. While DHS offers few details about what such a program would entail, it appears that DHS would collect biometric data as part of routine immigration applications in order to share that data with other law enforcement agencies and monitor individuals indefinitely.
The Rule Is Fatally Flawed and Must Be Stopped
EFF and our partners oppose this proposed rule on multiple grounds. It fails to take into account the serious privacy and security risks of expanding biometrics collection; it threatens First Amendment activity; and it does not adequately address the risk of error in the technologies and databases that store biometric data. Lastly, DHS has failed to provide sufficient justification for these drastic changes, and the proposed changes exceed DHS’s statutory authority.
Privacy and Security Threats
The breadth of the information DHS wants to collect is massive. DHS’s new definition of biometrics would allow for virtually unbounded biometrics collection in the future, creating untold threats to privacy and personal autonomy. This is especially true of behavioral biometrics, which can be collected without a person’s knowledge or consent, expose highly personal and sensitive information about a person beyond mere identity, and allow for tracking on a mass scale. Notably, both Democratic and Republican members of Congress have condemned China’s similar use of biometrics to track the Uyghur Muslim population in Xinjiang.
Of the new types of biometrics DHS plans to collect, DNA collection presents unique threats to privacy. Unlike other biometrics such as fingerprints, DNA contains our most private and personal information. DHS plans to collect DNA specifically to determine genetic family relationships and will store that relationship information with each DNA profile, allowing the agency to identify and map immigrant families and, over time, whole immigrant communities. DHS suggests that it will store DNA data indefinitely and makes clear that it retains the authority to share this data with law enforcement. Sharing this data with law enforcement only increases the risk that those required to give samples will be erroneously linked to a crime, while exacerbating problems related to the disproportionate number of people of color whose samples are included in government DNA databases.
The government’s increased collection of highly sensitive personal data is troubling not only because of the ways the government might use it, but also because that data could end up in the hands of bad actors. Put simply, DHS has not demonstrated that it can keep biometrics safe. For example, just last month, DHS’s Office of Inspector General (OIG) found that the agency’s inadequate security practices enabled bad actors to steal nearly 200,000 travelers’ face images from a subcontractor’s computers. A Government Accountability Office report similarly “identified long-standing challenges in CBP’s efforts to develop and implement [its biometric entry and exit] system.” There have also been serious security breaches from insiders at USCIS. And other federal agencies have had similar challenges in securing biometric data: in 2015, sensitive data on more than 25 million people stored in the Office of Personnel Management databases was stolen. And, as the multiple security breaches of India’s Aadhaar national biometric database have shown in the international context, these breaches can make millions of individuals subject to fraud and identity theft.
The risk of security breaches to children’s biometrics is especially acute. A recent U.S. Senate Commerce Committee report collects a number of studies that “indicate that large numbers of children in the United States are victims of identity theft.” Breaches of children’s biometric data further exacerbate this security risk because biometrics cannot be changed. As a recent UNICEF report explains, the collection of children’s biometric information exposes them to “lifelong data risks” that are not possible to presently evaluate. Never before has biometric information been collected from birth, and we do not know how the data collected today will be used in the future.
First Amendment Risks
This massive collection of biometric data—and the danger that it could be leaked—places a significant burden on First Amendment activity. By collecting and retaining biometric data like face recognition and sharing it broadly with federal, state, and local agencies, as well as with contractors and foreign governments, DHS lays the groundwork for a vast surveillance and tracking network that could impact individuals and communities for years to come. DHS could soon build a database large enough to identify and track all people in public places, without their knowledge—not just in places the agency oversees, like at the border, but anywhere there are cameras. This burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups that are the most likely to encounter DHS.
If immigrants and their U.S. citizen and permanent resident family members know the government can request, retain, and share with other law enforcement agencies their most intimate biometric information at every stage of the immigration lifecycle, many may self-censor and refrain from asserting their First Amendment rights. Studies show that surveillance systems and the overcollection of data by the government chill expressive and religious activity. For example, in 2013, a study involving Muslims in New York and New Jersey found excessive police surveillance in Muslim communities had a significant chilling effect on First Amendment-protected activities.
Problems with Biometric Technology
DHS’s decision to move forward with biometrics expansion is also questionable because the agency fails to consider the lack of reliability of many biometric technologies and the databases that store this information. One of the methods DHS proposes to employ to collect DNA, known as Rapid DNA, has been shown to be error prone. Meanwhile, studies have found significant error rates across face recognition systems for people with darker skin, and especially for Black women.
Moreover, it remains far from clear that collecting more biometrics will make DHS’s already flawed databases any more accurate. In fact, in a recent case challenging the reliability of DHS databases, a federal district court found that independent investigations of several DHS databases highlighted high error rates within the systems. For example, in 2017, the DHS OIG found that the database used for information about visa overstays was wrong 42 percent of the time. Other databases used to identify lawful permanent residents and people with protected status had a 30 percent error rate.
DHS’s Flawed Justification
DHS has offered little justification for this massive expansion of biometric data collection. In the proposed rule, DHS suggests that the new system will “provide DHS with the improved ability to identify and limit fraud.” However, the scant evidence that DHS offers to demonstrate the existence of fraud cannot justify its expansive changes. For example, DHS purports to justify its collection of DNA from children based on the fact that there were “432 incidents of fraudulent family claims” between July 1, 2019 and November 7, 2019 along the southern border. Not only does DHS fail to define what constitutes a “fraudulent family,” but it also leaves out that during that same period, an estimated 100,000 family units crossed the southern border, meaning that the so-called “fraudulent family” units made up less than one-half of one percent of all family crossings. And we’ve seen this before: the Trump administration has a troubling record of raising false alarms about fraud in the immigration context.
In addition, DHS does not address the privacy costs discussed in depth above. The proposed rule merely notes that “[t]here could be some unquantified impacts related to privacy concerns for risks associated with the collection.” And of course, the changes would come at a considerable financial cost to taxpayers, at a time when USCIS is already experiencing fiscal challenges. Even with the millions of dollars in new fees USCIS will collect, the rule is estimated to cost anywhere from $2.25 billion to $5 billion over the next 10 years. DHS also notes that additional costs could manifest.
Beyond DHS’s Mandate
Congress has not given DHS the authority to expand biometrics collection in this manner. When Congress has wanted DHS to use biometrics, it has said so clearly. For example, after 9/11, Congress directed DHS to “develop a plan to accelerate the full implementation of an automated biometric entry and exit data system.” But DHS can point to no such authorization in this instance. In fact, Congress is actively considering measures to restrict the government’s use of biometrics. It is not the place for a federal agency to supersede debate in Congress. Elected lawmakers must resolve these important matters through the democratic process before DHS can put forward a proposal like the proposed rule, which seeks to perform an end run around the democratic process.
What’s Next
If DHS makes this rule final, Congress has the power to block it from taking effect. We hope that DHS will take seriously our comments. But if it doesn’t, Congress will be hearing from us and our members.
Related Cases: Federal DNA Collection; FBI's Next Generation Identification Biometrics Database; DNA Collection
Imagine learning that you were wiretapped by law enforcement, but couldn’t get any information about why. That’s what happened to retired California Highway Patrol officer Miguel Guerrero, and EFF sued on his behalf to get more information about the surveillance. This week, a California appeals court ruled in his case that people who are targets of wiretaps are entitled to inspect the wiretap materials, including the order application and intercepted communications, if a judge finds that such access would be in the interests of justice. This is a huge victory for transparency and accountability in California courts.
This case arose from the grossly disproportionate volume of wiretaps issued by the Riverside County Superior Court in 2014 and 2015. In those years, that single, suburban county issued almost twice as many wiretaps as the rest of California combined, and almost one-fifth of all state and federal wiretaps issued nationwide. After journalists exposed Riverside County’s massive surveillance campaign, watchdog groups and even a federal judge warned that the sheer scale of the wiretaps suggested that the applications and authorizations violated federal law.
Guerrero learned from family members that his phone number was the subject of a wiretap order in 2015. Guerrero, a former law enforcement officer, has no criminal record, and was never arrested or charged with any crime in relation to the wiretap. And, although the law requires that targets of wiretaps receive notice within 90 days of the wiretap’s conclusion, he never received any such notice. He wanted to see the records both to inform the public and to assess whether to bring an action challenging the legality of the wiretap.
When we first went to court, the judge ruled that targets of wiretaps can unseal the wiretap application and order only by proving “good cause” for disclosure. The court then found that neither Guerrero’s desire to pursue a civil action nor the grossly disproportionate volume of wiretaps established good cause for disclosure, commenting that the number of wiretaps was “nothing more than routine.” The court further rejected our argument that the public has a First Amendment right of access to the wiretap order and application.
We appealed, and the Court of Appeal agreed that the trial court erred. The appeals court made clear that, under California law, the target of a wiretap need not show good cause. Instead, the target of a wiretap need only demonstrate that disclosure of the wiretap order and application is “in the interest of justice”—which unlike the good cause standard, does not include any presumption of secrecy.
Importantly, the court provided guidance for how to assess the “interest of justice” in this context, becoming one of the first courts in the nation to interpret this standard. As the court explained, the “interest of justice” analysis requires a court to consider the requester’s interest in access, the government’s interest in secrecy, the interests of other intercepted persons, and, significantly, the public interest. In considering the public interest, the court explained, courts should consider the huge volume of wiretaps approved in Riverside County. The court specifically rejected the trial court’s assessment that those statistics, on their own, were irrelevant without an independent showing of nefarious conduct.
The case now returns to the trial court, where the judge must apply the Court of Appeal’s analysis. We hope Mr. Guerrero will finally get some answers.
Related Cases: Riverside wiretaps
EFF Urges Vallejo’s Top Officials to End Unconstitutional Practice of Blocking Critics on Social Media
San Francisco—The Electronic Frontier Foundation (EFF) told the City of Vallejo that its practice of blocking people and deleting comments on social media because it doesn't like their messages is illegal under the First Amendment, and demanded that it stop engaging in such viewpoint discrimination, unblock all members of the public, and let them post comments.
In a letter today to Vallejo Mayor Bob Sampayan and the city council written on behalf of Open Vallejo, an independent news organization, and the Vallejo community at large, EFF said that when the government creates a social media page and uses it to speak to the public about its policies and positions, the page becomes a government forum. Under the First Amendment, the public has a right to receive and comment on messages in the forum. Blocking or deleting users’ comments based on the viewpoints expressed is unconstitutional viewpoint discrimination.
“Courts have made clear that government officials, even the president of the United States, can’t delete or block comments because they dislike the viewpoints conveyed,” said Naomi Gilens, Frank Stanton Legal Fellow at EFF. “Doing so is unconstitutional. We’ve asked that all official social media pages of Vallejo officials and pages of all city offices and departments unblock all members of the public and allow them to post comments.”
Open Vallejo discovered the practice of deleting comments and blocking users during an investigation of the social media practices of council members, other city officials, and the City of Vallejo itself.
EFF sided with members of the public blocked by President Trump on Twitter who sued him and members of his communications team in July 2017. We filed an amicus brief arguing that it’s common practice for government offices large and small to use social media to communicate to and with the public. All members of the public, regardless of whether government officials dislike their posts or tweets, have a right to receive and comment on government messages, some of which may deal with safety directions during fires, earthquakes, or other emergencies.
The district court agreed, ruling that President Trump’s practice violates the First Amendment. A federal appeals court upheld the ruling. Two other federal Courts of Appeals have ruled in separate cases that viewpoint discrimination on government social media pages is illegal.
We urge Vallejo to bring its social media practices in line with the Constitution, and have requested that city officials respond to our demand by Nov. 6.
For the letter:
For more on social media and the First Amendment:
Contact: Naomi Gilens, Frank Stanton Fellow, naomi@eff.org
It is a fundamental precept, at least in the United States, that the public should have access to the courts–including court records–and any departure from that rule must be narrow and well-justified. In a nation bound by the rule of law, the public must have the ability to know the law and how it is being applied. But for most of our nation’s history, that right didn’t mean much if you didn’t have the ability to get to the courthouse yourself.
In theory, the PACER (Public Access to Court Electronic Records) system should have changed all that. Though much-maligned for its user-unfriendly design, for more than 25 years PACER has made it possible for parties and the public to find all kinds of legal documents, from substantive briefs and judicial opinions to minor things like a notice of appearance. For those with the skill to navigate its millions of documents, PACER is a treasure trove of information about our legal system–and how it can be abused.
But using PACER takes more than skill–it takes money. Subject to some exceptions, the PACER system charges 10 cents a page to download a document, and that cost can add up fast. The money is supposed to cover the price of running the system, but has been diverted to cover other costs. And either way, those fees are an unfair barrier to access. Open access activists have tried for years to remedy the problem, and have managed to improve access to some of those records. The government itself made some initial forays in the right direction a decade ago, but then retreated, claiming privacy and security concerns. A team of researchers has developed software, called RECAP, that helps users automatically search for free copies of documents, and helps build up a free alternative database. Nonetheless, today most of PACER remains locked behind a paywall.
It’s past time to tear that paywall down, and a bill now working its way through Congress, the bipartisan Open Courts Act of 2020, aims to do just that. The bill would provide public access to federal court records and improve the federal court’s online record system, eliminating PACER's paywall in the process. EFF and a coalition of civil liberties organizations, transparency groups, retired judges, and law libraries have joined together to push Congress and the U.S. Federal Courts to eliminate the paywall and expand access to these vital documents. In a letter (PDF) addressed to the Director of the Administrative Office of United States Courts, which manages PACER, the coalition calls on the AO not to oppose this important legislation.
Passage of the bill would be a huge victory for transparency, due process and democratic accountability. This Open Access Week, EFF urges Congress and the courts to support this important legislation and remove the barriers that make PACER a gatekeeper to information, rather than the open path to public records that it ought to be.
EFF is proud to celebrate Open Access Week.
Related Cases: Freeing the Law with Public.Resource.Org
The sudden move to remote education by universities this year has forced the inevitable: the move to online education. While most universities won’t be fully remote, having course materials online was already becoming the norm before the COVID-19 pandemic, and this year it has become mandatory for millions of educators and students. As academia recovers from this crisis, and hopefully prepares for the next one, the choices we make will send us down one of two paths. We can move towards a future of online education which replicates the artificial scarcity of traditional publishing, or take a path which fosters an abundance of free materials by embracing the principles of open access and open education.
The well-worn, hefty, out-of-date textbook you may have bought some years ago was likely obsolete the moment you had a reliable computer and an Internet connection. Traditional textbook publishers already know this, and tout that they have embraced the digital era with ebooks and e-rentals—sometimes even at a discount. Despite some state laws discouraging the practice, publishers try to bundle their digital textbooks into “online learning systems,” often at the expense of the student. Yet the cost and time needed to copy and distribute thousands of digital textbooks are trivial compared to their physical equivalents.
To make matters worse, these online materials are often locked down with DRM that prevents buyers from sharing or reselling books, in turn devastating the secondhand textbook market. This creates the absurd situation of ebooks, which are almost free to reproduce, being effectively more expensive than a physical book you plan to resell. Fortunately for all of us, this scarcity is constructed, and there exists a more equitable and intuitive alternative.
Right now there is a global collaborative effort among the world's educators and librarians to provide high-quality, free, and up-to-date education materials to all with little restriction. This of course is the global movement towards open education resources (OER). While this tireless effort of thousands of academics may seem complicated, it revolves around a simple idea: Education is a fundamental human right, so if technology enables us to share, reproduce, and update educational materials so effectively that we can give them away for free—it’s our moral duty to do so.
This cornucopia of syllabuses, exams, textbooks, video lectures, and much more is already available and awaiting eager educators and students. This is thanks to the power of open licensing, typically the Creative Commons Attribution license (CC BY), which is the standard for open educational resources. Open licensing preserves your freedom to retain, reuse, revise, remix, and redistribute educational materials. Much like free and open source licensing for code, these licenses help foster a collaborative ecosystem where people can freely use, improve, and recreate useful tools.
Yet, most college students are still stuck on the path of prohibitively expensive and often outdated books from traditional publishers. While this situation is bad enough on its own, the COVID-19 pandemic has heightened the absurd and contradictory nature of this status quo. The structural equity offered by supporting OER is as clear and urgent as ever. Open Education, like all of open access more broadly, is a human rights issue.
The Squeeze on Students and Instructors
How do college students cope with being assigned highly priced textbooks? Some are fortunate enough to buy them outright, or can at least scrape together enough to rent everything they need. When physical books are available, students can share copies, resell them, and buy used. Artificial book scarcity has fortunately already been addressed in many communities with well-funded libraries. Unfortunately, the necessity of reducing social contact during the pandemic has made these physical options more difficult if not impossible to orchestrate. That leaves the most vulnerable students with the easiest and by far most common solution for navigating the predicament: hope that you don’t actually need the book and avoid the purchase altogether.
Unsurprisingly, a student's performance is highly impacted by access to educational materials, and trying to catch up late in the semester is rarely viable. In short, these wholly artificial barriers to accessing necessary educational materials are setting up the most vulnerable students to choose between risking their grades, their health, or their wallet. Fortunately there is a growing number of institutions embracing OER, saving their students millions of dollars while making it possible for every student to succeed without any undue costs.
Instructors at universities have been feeling the pressure, too. With little support at most institutions, they were asked to prepare a fully online course for the fall, sometimes in addition to an in-person course plan. Studying this sudden pivot, Bay View Analytics estimates 97% of institutions had faculty teaching online for the first time, with 56% of instructors needing to adopt teaching methods they had never tried before.
Adapting a course to work online is not a trivial amount of work. Integrating technology into education often requires special training in pedagogy, as well as attention to the digital divide and emerging privacy concerns. Even without such training, drawing on a selection of pre-made courses to freely adapt is an age-old academic practice that can relieve instructors of this burden. That informal system of sharing, however, only works for instructors who happen to know others who have taught similar courses online—and this is where the power of OER can really take hold.
Instead of being limited to who you know, the global community around OER offers a much broader variety of syllabuses and assignments for different teaching styles. As OER continues to grow, instructors will be more resilient and able to choose between the best materials the global OER community has to offer.
Building Towards Education Equity
Despite the many benefits of open access and open education, most instructors have still never heard of OER. A simple first step away from an expensive and locked-down system of education, then, is to make the benefits of OER more widely known. While pushing for the broader utilization of OER, we must advocate for systemic changes to make sure OER is supported on every campus.
For this task, supporting public and private libraries is essential. Despite years of austerity cuts, many academic libraries have established hubs of well-curated OER resources, tailored for the needs of their institution. As just one example, OASIS is a hub of OER at the State University of New York at Geneseo, where librarians maintain materials from over 500 institutions. As a greater number of educational materials utilize open licenses, it will be essential for librarians to help instructors navigate the growing number of options.
State legislatures are also increasingly introducing bills to address this issue, and we should all push our legislatures to do what’s right. Public funding should save students money and save teachers time, not deepen the divide between those who can and those who can’t access resources.
This is a lesson we cannot forget as we recover from the current crisis. Structural inequity and a system of artificial scarcity is nothing new, and it will still be there on the other side of the COVID-19 pandemic. Traditional publishers have restrained education too much for too long. We already have a future where education can be adaptable, collaborative, and free. Now is the time to reclaim it.
EFF is proud to celebrate Open Access Week 2020.
Peru’s Third Who Defends Your Data? Report: Stronger Commitments from ISPs, But Imbalances and Gaps to Bridge
Hiperderecho, Peru’s leading digital rights organization, today launched its third ¿Quién Defiende Tus Datos? (Who Defends Your Data?), a report that seeks to hold telecom companies accountable for their users’ privacy. The new Peruvian edition shows improvements compared to 2019’s evaluation.
Movistar and Claro commit to requiring a warrant before handing over both users’ communications content and metadata to the government. The two companies also earned credit for defending users’ privacy in Congress or for challenging government requests; neither scored a star in this category last year. Claro stands out with detailed law enforcement guidelines, including an explanatory chart of the procedures the company follows before complying with law enforcement requests for communications data. However, Claro should be more specific about the type of communications data covered by the guidelines. All companies received full stars for their privacy policies, while only three did so in the previous report. Overall, Movistar and Claro are tied in the lead. Entel and Bitel lag behind, with the former holding a slight advantage.
¿Quién Defiende Tus Datos? is part of a series across Latin America and Spain carried out in collaboration with EFF and inspired by our Who Has Your Back? project. This year’s edition evaluates the four largest Internet Service Providers (ISPs) in Peru: Telefónica-Movistar, Claro, Entel, and Bitel.
Hiperderecho assessed Peruvian ISPs on seven criteria concerning privacy policies, transparency, user notification, judicial authorization, defense of human rights, digital security, and law enforcement guidelines. Compared to last year, the report adds two new categories: whether ISPs publish law enforcement guidelines and whether companies commit to protecting users’ digital security. The full report is available in Spanish; here we outline the main results:
Regarding transparency reports, Movistar leads the way, earning a full star, while Claro receives a partial one. To earn credit, a report had to provide useful data about how many requests the company received and how many times it complied, as well as details about the government agencies that made the requests and the authorities’ justifications. For the first time, Claro has provided statistical figures on government demands requiring the “lifting of the secrecy of communication (LST).” However, Claro fails to clarify which types of data (IP addresses and other technical identifiers) are protected under this legal regime. Since Peru's Telecommunications Law and its regulation protect under communications secrecy both the content and personal information obtained through the provision of telecom services, we assume Claro may include both. Still, as a best practice, the ISP should be explicit about the types of data, including technical identifiers, protected under communications secrecy. As Movistar does, Claro should also break down its statistics on government requests into content interception and metadata requests.
Movistar and Claro have published their law enforcement guidelines. While Movistar only released a general global policy applicable to its subsidiaries, Claro stands out with detailed guidelines for Peru, including an explanatory chart of the procedures the company follows when responding to law enforcement requests for communications data. On the downside, the document refers broadly to "lifting the secrecy of communication" requests without defining what that entails. It should give users greater insight into which kinds of data the outlined procedures cover and whether they mostly address authorities' access to communications content or also refer to specific metadata requests.
Entel, Bitel, Claro, and Movistar have all published easy-to-understand privacy policies applicable to their services. All of the ISPs’ policies describe the data collected (such as names, addresses, and records related to the provision of service) and the cases in which the company shares personal data with third parties. Claro and Movistar receive full credit in the judicial authorization category for having policies or other documents indicating their commitment to requesting a judicial order before handing over communications data, unless the law mandates otherwise. Similarly, Entel states that it shares users' data with the government in compliance with the law. Peruvian law grants the specialized police investigation unit the power to request from telecom operators access to metadata in specific emergencies set out in Legislative Decree 1182, subject to subsequent judicial review.
Latin American countries still have a long way to go in shedding enough light on government surveillance practices. Publishing meaningful transparency reports and law enforcement guidelines are two critical measures companies should commit to; user notification is a third. In Peru, none of the ISPs have committed to notifying users of government requests at the earliest moment allowed by law. Still, Movistar and Claro have provided further information on their reasons for this refusal and their interpretation of the law.
In the digital security category, all companies received credit for using HTTPS on their websites and for providing secure methods, such as two-step authentication, in their online channels. All companies but Bitel scored in the human rights promotion category. While Entel receives a partial score for joining local multi-stakeholder forums, Movistar and Claro earn full stars. Among other actions, Movistar has sent comments to Congress in favor of users’ privacy, and Claro has challenged a disproportionate request from the country’s tax administration agency (SUNAT) before Peru’s data protection authority.
We are glad to see that Peru’s third report shows significant progress, but much remains to be done to protect users’ privacy. Entel and Bitel have to catch up with the larger regional providers, and Movistar and Claro can go further still to complete their charts of stars. Hiperderecho will remain vigilant through its ¿Quién Defiende Tus Datos? reports.