Electronic Frontier Foundation
Privacy and security are both team sports: no one person or organization can change the landscape alone. This is why coalition-building is often at the heart of activism. In 2019, EFF was one of the ten organizations that founded the Coalition Against Stalkerware, a group of security companies, non-profit organizations, and academic researchers that supports survivors of domestic abuse by working together to address technology-enabled abuse and raise awareness of the threat posed by stalkerware. Among its early achievements are the creation of an industry-wide definition of stalkerware, the encouragement of research into the proliferation of stalkerware, and convincing anti-virus companies to detect and report stalkerware as malicious or unwanted programs.
Stalkerware is a class of apps sold commercially for the purpose of covertly spying on another person’s device. Some are blatantly marketed as tools for “catching a cheating spouse,” while others euphemistically describe themselves as tools for tracking your children’s or employees’ devices. The key defining feature of stalkerware is that it is designed to operate covertly, tricking device owners into believing they are not being monitored.
Less than a year after its founding, the coalition has more than doubled in size. The original ten partners (Avira, Electronic Frontier Foundation, the European Network for the Work with Perpetrators of Domestic Violence, G DATA Cyber Defense, Kaspersky, Malwarebytes, The National Network to End Domestic Violence, NortonLifeLock, Operation Safe Escape, and WEISSER RING) have been joined by AEquitas with its Stalking Prevention, Awareness, and Resource Center (SPARC), Anonyome Labs, AppEsteem Corporation, bff Bundesverband Frauenberatungsstellen und Frauennotrufe, Centre Hubertine Auclert, Copperhead, Corrata, Commonwealth Peoples’ Association of Uganda, Cyber Peace Foundation, F-Secure, and Illinois Stalking Advocacy Center. The coalition is especially excited about adding organizations in India and Uganda, because stalkerware is a global problem that requires global solutions beyond the countries and regions represented by the coalition’s founding organizations.
The Coalition has also produced an explanatory video, which describes common indicators to check for if a user thinks their device has been infected with stalkerware. The video is available in six languages: English, Spanish, Italian, German, French, and Portuguese. The Coalition’s website also contains resources detailing what stalkerware is, how it works, how to detect it, and how to protect your devices, as well as contact information for many local victims’ services organizations. Coalition members have also released documentation and evidence collection apps (DocuSAFE and NO STALK), for use on trusted devices to collect, store, and share evidence of abuse with law enforcement and survivor support organizations.
The fight is just beginning, but norms are already changing.
EFF Joins Coalition to Call on the EU to Introduce Interoperability Rules
Today, EFF sent a joint letter to European Commission Executive Vice-President Margrethe Vestager, highlighting the enormous potential of interoperability to help achieve the EU’s goals for Europe’s digital future. EFF joins a strong coalition of organizations representing European civil society organizations, entrepreneurs, and SMEs. We are calling on the European Commission to consider the role interoperability can play in ensuring that technology creates a fair and competitive economy and strengthens an open, democratic and sustainable society. Specifically, we urge the Commission to include specific measures requiring interoperability of large Internet platforms in the forthcoming Digital Services Act package. This will strengthen user empowerment and competition in the European digital single market.
Interoperability mandates will enable users to exercise greater control over their online experiences. No longer confronted with the binary choice of either staying on dominant platforms that do not serve their needs or losing access to their social network, users will be able to choose freely the tools that best respect their privacy, security, or accessibility preferences. Interoperability rules will also be crucial to ensure a dynamic market in which new entrants and innovative business models will have a fair shot to convince users of their value.
The upcoming Digital Services Act is a crucial, and rare, opportunity to achieve these goals. This is not just any regulatory reform, but the most significant reform project the European Union has undertaken in two decades. And we intend to fight for users’ rights, transparency, anonymity, and limited liability for online platforms every step of the way.
Guided by our policy principles, we will work with and support the Commission in its efforts to develop a Digital Services Act that best addresses the challenges for Europe’s Digital Future and will submit detailed responses to the public consultation now underway.
The full text of the letter to Executive Vice-President Margrethe Vestager is available here.
Should the police be able to force Google to turn over identifying information on every phone within a certain geographic area—potentially hundreds or thousands of devices—just because a crime occurred there? We don’t think so. As we argued in an amicus brief filed recently in People v. Dawes, a case in San Francisco Superior Court, this is a general search and violates the Fourth Amendment.
The court is scheduled to hear the defendant’s motion to quash and suppress evidence on July 7, 2020.
In 2018, police in San Francisco were trying to figure out who robbed a house in a residential neighborhood. They didn’t have a suspect. Instead of using traditional investigative techniques to find the culprit, they turned to a new surveillance tool that’s been gaining interest from police across the country—a “geofence warrant.”
Unlike traditional warrants for electronic records, a geofence warrant doesn’t start with a suspect or even an account; instead it directs Google to search a vast database of location history information to identify every device (for which Google has data) that happened to be in the area around the time of the crime, regardless of whether the device owner has any link at all to the crime under investigation. Because these investigations start with a location before they have a suspect, they are also frequently called “reverse location” searches.
Google has a particularly robust, detailed, and searchable collection of location data, and, to our knowledge, it is the only company that complies with these warrants. Much of what we know about the data Google provides to police and how it provides that data comes from a declaration and an amicus brief it filed in a Virginia case called United States v. Chatrie. According to Google, the data it provides to police comes from its database called “Sensorvault,” where it stores location data for one of its services called “Location History.” Google collects Location History data from different sources, including wifi connections, GPS and Bluetooth signals, and cellular networks. This makes it much more precise than cell site location information and allows Google to estimate a device’s location to within 20 meters or less. This precision also allows Google to infer where a user has been (such as to a ski resort), what they were doing at the time (such as driving), and the path they took to get there.
Location History is offered to users on both Android and iOS devices, but users must opt in to data collection. Google states that only about one-third of its users have opted in to Location History, but that this represents “numerous tens of millions of Google users.”
Police have been increasingly seeking access to this treasure trove of data over the last few years via geofence warrants. These warrants reportedly date to 2016, but Google states that it received 1500% more geofence warrants in 2018 than 2017 and 500% more in 2019 than in 2018. According to the New York Times, the company received as many as 180 requests in a single week in 2019.
Geofence warrants typically follow a similar multi-stage process, which appears to have been created by Google. For the first stage, law enforcement identifies one or more geographic areas and time periods relevant to the crime. The warrant then requires Google to provide information about any devices, identified by a numerical identifier, that happened to be in the area within the given time period. Google says that, to comply with this first stage, it must search through its entire store of Location History data to identify responsive data—data on tens of millions of users, nearly all of whom are located well outside the geographic scope of the warrant. Google has also said that the volume of data it produces at this stage depends on the size and nature of the geographic area and length of time covered by the warrant, which vary considerably from one request to another, but the company once provided the government with identifying information for nearly 1,500 devices.
After Google releases the initial de-identified pool of responsive data, police then, in the second stage, demand Google provide additional location history outside of the initially defined geographic area and time frame for a subset of users that the officers, at their own discretion, determine are “relevant” to their investigation. Finally, in the third stage, officers demand that Google provide identifying information for a smaller subset of devices, including the user’s name, email address, device identifier, phone number and other account information. Again, officers rely solely on their own discretion to determine this second subset and which devices to target for further investigation.
There are many problems with this kind of search. First, most of the information provided to law enforcement in response to a geofence warrant does not pertain to individuals suspected of the crime. Second, because not all device owners have opted in to Location History, search results are both over- and under-inclusive. Finally, Google has said there is only an estimated 68% chance that a user is actually where Google thinks they are, so the users Google identifies in response to a geofence warrant may not even have been within the geographic area defined by the warrant (and therefore fall outside its scope).
Unsurprisingly, these problems have led to investigations that ensnare innocent individuals. In one case, police sought detailed information about a man in connection with a burglary after seeing his travel history in the first step of a geofence warrant. However, the man’s travel history was part of an exercise tracking app he used to log months of bike rides—rides that happened to take him past the site of the burglary. Investigators eventually acknowledged he should not have been a suspect, but not until after the man hired an attorney and after his life was upended for a time.
This example shows why geofence warrants are so pernicious and why they violate the Fourth Amendment. They lack particularity because they don’t properly and specifically describe an account or a person’s data to be seized, and they result in overbroad searches that can ensnare countless people with no connection to the crime. These warrants leave it up to the officers to decide for themselves, based on no concrete standards, who is a suspect and who isn’t.
The Fourth Amendment was written specifically to prevent these kinds of broad searches.
As we argued in Dawes, a geofence warrant is a digital analog to the “general warrants” issued in England and Colonial America that authorized officers to search anywhere they liked, including people or homes, simply on the chance that they might find someone or something connected with the crime under investigation. The chief problem with searches like this is that they leave too much of the search to the discretion of the officer and can too easily result in general exploratory searches that unreasonably interfere with a person’s right to privacy. The Fourth Amendment’s particularity and probable cause requirements, as well as the requirement of judicial oversight, were designed to prevent this.
Reverse location searches are the antithesis of how our criminal justice system is supposed to work. As with other technologies that purport to pull a suspect out of thin air—like face recognition, predictive policing, and genetic genealogy searches—there’s just too high a risk they will implicate an innocent person, shifting the burden of proving guilt from the government to the individual, who now has to prove their innocence. We think these searches are unconstitutional, even with a warrant.
The day before a committee debate and vote on the EARN IT Act, the bill’s sponsors replaced their bill with an amended version. Here’s their new idea: instead of giving a 19-person federal commission, dominated by law enforcement, the power to regulate the Internet, the bill now effectively gives that power to state legislatures.
And instead of requiring that Internet websites and platforms comply with the commission’s “best practices” in order to keep their vital legal protections under Section 230 for hosting user content, it simply blows a hole in those protections. State lawmakers will be able to create new laws allowing private lawsuits and criminal prosecutions against Internet platforms, as long as they say their purpose is to stop crimes against children.
The whole idea behind Section 230 is to make sure that you are responsible for your own speech online—not someone else’s. Currently, if a state prosecutor wants to bring a criminal case related to something said or done online, or a private lawyer wants to sue, in nearly all cases, the prosecutor has to seek out the actual speaker. They can’t just haul a website owner into court because of the user’s actions. But that will change if EARN IT passes. That’s why we sent a letter [PDF] yesterday to the Senate Judiciary Committee opposing the amended EARN IT bill.
Section 230 protections enabled the Internet as we know it. Despite the politicized attacks on Section 230 from both left and right, the law actually works fine. It’s not a shield for Big Tech—it’s a shield for everyone who hosts online conversations. It protects small messaging and email services, and every blog’s comments section.
Once websites lose Section 230 protections, they’ll take drastic measures to mitigate their exposure. That will limit free speech across the Internet. They’ll shut down forums and comment sections, and cave to bogus claims that particular users are violating the rules, without doing a proper investigation. We’ve seen false accusations succeed in silencing users time and again in the copyright space, and even used to harass innocent users. If EARN IT passes, the range of possibilities for false accusations and censorship will expand.
EARN IT Still Threatens Encryption
When we say the original EARN IT was a threat to encryption, we’re not guessing. We know that a commission controlled by Attorney General William Barr will try to ban encryption, because Barr has said many times that he thinks encrypted services should be compelled to create backdoors for police. The Manager’s Amendment, approved by the Committee today, doesn’t eliminate this problem. It just empowers over 50 jurisdictions to follow Barr’s lead in banning encryption.
An amendment by Sen. Patrick Leahy (D-VT), also voted into the bill, purports to protect encryption from being the states’ focus. It’s certainly an improvement, but we’re still concerned that the amended bill could be used to attack encryption. Sen. Leahy’s amendment prohibits holding companies liable because they use “end-to-end encryption, device encryption, or other encryption services.” But the bill still encourages state lawmakers to look for loopholes to undermine end-to-end encryption, such as demanding that messages be scanned on a local device, before they get encrypted and sent along to their recipient. We think that would violate the spirit of Senator Leahy’s amendment, but the bill opens the door for that question to be litigated over and over, in courts across the country.
And again, this isn’t a theoretical problem. The idea of using “client-side scanning” to allow certain messages to be selected and sent to the government, circumventing the protections of end-to-end encryption, is one we’ve heard a lot of talk about in the past year. Despite the testimonials of certain experts who have sided with law enforcement, the fact is, client-side scanning breaks the protections of encryption. The EARN IT Act doesn’t stop client-side scanning, which is the most likely strategy for state lawmakers who want to use this bill to expand police powers in order to read our messages.
And it will only take one state to inspire a wave of prosecutions and lawsuits against online platforms. And just as some federal law enforcement agencies have declared they’re opposed to encryption, so have some state and local police.
The previous version of the bill suggested that if online platforms want to keep their Section 230 immunity, they would need to “earn it,” by following the dictates of an unelected government commission. But the new text doesn’t even give them a chance. The bill’s sponsors simply dropped the “earn” from EARN IT. Website owners—especially those that enable encryption—just can’t “earn” their immunity from liability for user content under the new bill. They’ll just have to defend themselves in court, as soon as a single state prosecutor, or even just a lawyer in private practice, decides that offering end-to-end encryption was a sign of indifference towards crimes against children.
Offering users real privacy, in the form of end-to-end encrypted messaging, and robust platforms for free speech shouldn’t produce lawsuits and prosecutions. The new EARN IT bill will do just that, and should be opposed.
What Is AMP?
The Google AMP project was announced in late 2015 with the promise of giving publishers a faster way of serving and distributing content to their users. It was also marketed as a more adaptable approach than Apple News and Facebook Instant Articles. AMP pages began appearing by 2016. But right away, many observed that AMP encroached on the principles of the open web. The web was built on open standards, developed through consensus, that small and large actors alike can use; honoring those principles means keeping open web standards in the forefront and discouraging proprietary, closed ones.
Instead of utilizing standard HTML markup tags, a developer would use AMP tags. For example, here’s what an embedded image looks like in classic HTML, versus what it looks like using AMP:
HTML Image Tag:
<img src="src.jpg" alt="src image" />
AMP Image Tag:
<amp-img src="src.jpg" width="900" height="675" layout="responsive" alt="src image"></amp-img>
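AMP components like `<amp-img>` only work on pages that load the AMP runtime. As a rough sketch of what that entails (the URLs here are placeholders, and the required `amp-boilerplate` style block is omitted for brevity), a minimal AMP document looks something like this:

```html
<!doctype html>
<html amp lang="en">
  <head>
    <meta charset="utf-8">
    <!-- The AMP runtime script is required on every AMP page -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
    <!-- Points back to the non-AMP version of this page (placeholder URL) -->
    <link rel="canonical" href="https://example.com/article.html">
    <meta name="viewport" content="width=device-width">
    <!-- The mandatory amp-boilerplate <style> blocks would go here -->
  </head>
  <body>
    <!-- width and height let the runtime reserve layout space
         before the image loads -->
    <amp-img src="src.jpg" width="900" height="675" layout="responsive"
        alt="src image"></amp-img>
  </body>
</html>
```

Note that `<amp-img>` takes an explicit closing tag: as a custom element, it cannot be self-closing the way a plain `<img>` can.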
Since launch, pages have proven to load faster when using AMP, so the technology’s promises aren’t necessarily bad from a speed perspective alone. Of course, there are other ways of improving performance, such as minimizing files, building lighter code, using CDNs (content delivery networks), and caching. There are also other Google-backed frameworks, like PWAs (progressive web applications) and service workers.
AMP has been around for four years now, and the criticisms still hold today, as AMP’s latest developments reach into a very important part of the web: the URL.
Canonical URLs and AMP URLs
When you visit a site, say your favorite news site, you would normally see the original domain along with the path to the page you are on:
This, along with the site’s SSL certificate, gives you reasonable confidence that you are seeing web content served from this site at this URL. This is what would be considered a canonical URL.
An AMP URL, however, can look like this:
Using canonical URLs, users can more easily verify that the site they’re on is the one they’re trying to visit. But AMP URLs muddied the waters, forcing users to adopt new ways of verifying the origins of content.
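Behind the scenes, the two versions of a page reference each other with `<link>` tags, which is how crawlers discover the AMP version of a canonical page. A sketch, with hypothetical URLs:

```html
<!-- On the canonical page (e.g. https://example.com/article.html): -->
<link rel="amphtml" href="https://example.com/article.amp.html">

<!-- On the AMP page (e.g. https://example.com/article.amp.html): -->
<link rel="canonical" href="https://example.com/article.html">
```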
AMP goes a step further with its structure for pre-rendered pages built from cached content. The URL below would not be in view of the user; rather, the content (text, images, etc.) served onto the cached page would come from it.
The final URL, the one in view or the URL bar, of a cached AMP page would look something like this:
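To make the distinction concrete, here is a sketch of the three URLs involved, with example.com standing in for a real publisher (the domains and paths are hypothetical, but the structure follows the AMP cache’s documented format):

```
Canonical URL:      https://example.com/2020/07/article.html
AMP cache URL:      https://example-com.cdn.ampproject.org/c/s/example.com/2020/07/article.html
Google viewer URL:  https://www.google.com/amp/s/example.com/2020/07/article.html
```

The cache URL is what actually serves the pre-rendered content; the viewer URL is what appears in the browser’s address bar.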
This cache model does not follow the web’s origin concept; it creates a new framework and structure to adhere to. The promise is better performance and experience for users. Yet the approach is implementation first and web standards later. Since Google has become such an ingrained part of the modern web for so many, any technology it deploys immediately has a large share of users and adopters. Pair this with arguments other product teams within Google have made to reshape the URL as we know it, and the result has fundamentally changed the way the mobile web is served for many users.
Another, more recent development is support for Signed HTTP Exchanges, or “SXG,” a subset of the Web Packages standard that allows further decoupling of the distribution of web content from its origin via cryptographically signed HTTP exchanges (a web page). This is supposed to address the problem, introduced by AMP, that the URL a user sees does not correspond to the page they’re trying to visit. SXG allows the canonical URL (instead of the AMP URL) to be shown in the browser when you arrive, closing the loop back to the original publisher. The positive here is that a web standard was used; the negative is the speed of adoption without general consensus from other major stakeholders. Currently, SXG is supported only in Chrome and Chromium-based browsers.
Pushing AMP: how did a new “standard” take over?
News publishers were among the first to adopt AMP. Google even partnered with a major CMS (content management system), WordPress, to further promote AMP. Publishers use CMS services to upload, edit, and host content, and WordPress holds about 60% of the CMS market share. Publishers also compete on other Google products, such as Google Search. So perhaps some publishers adopted AMP because they thought it would improve their SEO (search engine optimization) on one of the web’s most used search engines. Google has disputed this, maintaining that performance is prioritized no matter what technology is used to achieve it. But since the Google Search algorithm is largely secret, we can only take these statements at their word. Tangentially, the “Top Stories” feature in mobile Search recently dropped AMP as a requirement.
Despite promoting itself as an open source project, AMP was fairly closed off in terms of control at launch. Publishers ended up reporting higher speeds, but this was left to a “time will tell” set of metrics. In practice, the statement “you don’t need AMP to rank higher” often competes with “just use AMP and you will rank higher,” which can be tempting to publishers trying to reach the performance bar that gets their content prioritized.
Web Community First
We should focus less on whether or not AMP is a good tool for performance, and more on how this framework was molded by Google’s initial ownership. The cache layer is owned by Google, and even though it’s not required, most common implementations use it. Concerns around analytics have been addressed, and Google has also done the courtesy of allowing other major ad vendors into the AMP model for ad content. This is a mere concession, though, since Google Analytics has such a large share of the measured web.
However, all of this came well after AMP had been integrated into major news sites, e-commerce, and even nonprofits. So this new model is not an even-ground, democratic approach. No matter the intentions, good or bad, those who work with powerful entities need to check their power at the door if they want a more equitable and usable web. Not acknowledging the power one wields only reinforces a false sense of democracy that never existed.
Furthermore, the web standards process itself is far from perfect. Standards organizations are heavily dominated by corporate members, and connections to those members offer immense social capital. Less-represented people often lack the social capital, the money for membership fees, or the time to join, and these groups tend to lack diversity as well. A more equitable process for these organizations is still a long way off. These particular issues are not Google’s fault, but Google has an immense amount of power when it comes to joining these groups: it’s not a matter of earning its way up, but of deciding whether to loosen the reins.
At this point, Google can’t retroactively release the control it exercised over AMP’s adoption. And we can’t go back to a pre-AMP web to start over. The discussions about whether the AMP project should be abandoned, or discouraged in favor of a different framework, have long passed. Whether or not users can opt out of AMP has been decided in many corners of the web. All we can do now is learn from the process and try to make sure AMP is developed in the best interests of users and publishers going forward. But the open web shouldn’t have to weather lesson after lesson on power and control from big tech companies that need to re-learn accountability with each new endeavor.
Amazon’s Ring Enables the Over-Policing Efforts of Some of America’s Deadliest Law Enforcement Agencies
Ring, Amazon’s “smart” doorbell camera company, recently began sharing statistics on how many video requests police departments submit to users, and the numbers are staggering. In the first quarter of 2020 alone, police requested videos over 5000 times, using their partnerships with the company to email users directly and ask them to share private videos from their Ring devices.
It’s unclear how many video requests were successful, as users have the option to deny the requests; however, even a small percentage of successful requests would mean potentially thousands of videos shared with police per year. With a warrant, police could also circumvent the device’s owner and get footage straight from Amazon, even if the owner denied the police.
But this isn’t the only disturbing number: roughly half of the agencies that now partner with Ring also have been responsible for at least one fatal encounter in the last five years, an analysis of data from Ring, Fatal Encounters, and Mapping Police Violence shows. In fact, those agencies have been responsible for over a third of fatal police encounters nationwide.[1]
Concerns about police violence must be tied to concerns about growing police surveillance and the lack of accountability that comes with it. Ring partnerships facilitate police access to video and audio footage from massive numbers of doorbell cameras aimed at public spaces such as sidewalks—a feature that could conceivably be used to identify participants in a protest through a neighborhood. They create a high-speed digital mechanism by which users can make snap judgements about who does, and who does not, belong in their neighborhood—and (sometimes dangerously) summon police to confront them. These partnerships also make it all too easy for law enforcement to harass, arrest, or detain someone who is simply exercising their legal rights to free expression, for example, by walking through a neighborhood or canvassing for a political campaign.
Ring Is More Concerned With Profit Than Safety or Privacy
Alongside a growing number of civil liberties organizations and elected officials, EFF has expressed concern with these partnerships since they were initially reported two years ago. Since then, they have grown nearly exponentially, reaching over 1400 agencies after adding 600 in the last six months alone.
At a time when there’s more criticism than ever of the dangerous ways in which law enforcement interacts with their communities, and as citizens and companies alike are questioning the ways that they facilitate those interactions, why would Ring double down on partnerships with police? The answer is simple: profit.
While at least some technology companies, and their employees, are reexamining their close relationships with law enforcement, Ring continues to intentionally blur the line between what’s best for the safety of a community and what’s best for its bottom line. Ring has gone so far as to draft press statements and social media posts for police to promote Ring cameras and write talking points to convince users to hand over their footage, creating a vicious cycle in which police promote the adoption of Ring, Ring terrifies people into thinking their homes are in danger, and Amazon sells more cameras. This arrangement makes salespeople out of what should be impartial and trusted protectors of our civic society. In some instances, local police departments even get credit toward buying cameras they can distribute to residents.
Every time a community app leads to a 911 call on a person of color—whether a neighbor or a visitor—it puts that person at risk of being harassed or even killed by the police. Research shows that users are more likely to report Black people as engaging in suspicious activities on social networks like Neighbors or Nextdoor. The dangers aren’t just theoretical: Ring and other neighborhood watch apps have both been under fire for years for increasing paranoia, normalizing surveillance, and facilitating reporting of so-called “suspicious” behavior that ultimately amounts to racial profiling.
A Black real estate agent was stopped by police because neighbors on Nextdoor thought it was “suspicious” for him to ring a doorbell. A man was shot and killed by sheriff’s deputies the same night a woman’s Ring camera captured video of him on her porch and that she subsequently shared to the Neighbors app. These apps essentially streamline police escalation and increase the likelihood of violent interactions. But far from pushing users to be cautious and careful when reporting behavior, Ring has moved in the opposite direction, going so far as to gamify the reporting of suspicious activity and offer free products in exchange for reports.
Ring isn’t the only surveillance equipment used by these agencies, of course. A more detailed survey (which we are also working on in our Atlas of Surveillance) would no doubt indicate the use of other dangerous technologies, like face recognition, automated license plate readers, and drones—but Ring partnerships are particularly insidious. Importantly, Ring hasn’t been shown to decrease crime. But it does allow for the expansion of unaccountable mass surveillance by law enforcement. By asking residents (and private companies) to do the technological work of policing, Ring minimizes transparency and police accountability; you can't send a public records request to Amazon. And if the owner of a Ring camera refuses to hand over their video to police, officers can still go straight to Amazon with a warrant and ask them for the footage, circumventing the camera’s owner.
A relationship with Ring, an increasing desire for door-front surveillance, and a responsiveness to paranoid reports from these apps are all indicators of a type of over-policing that puts officers more frequently into contact with vulnerable populations. Amazon Ring should not be partnering with any law enforcement agency—but it’s especially concerning how many of the agencies it has partnered with are responsible for deaths in their communities in the last few years.
Ring Must Admit the Danger of These Partnerships
In June, Nextdoor, the neighborhood watch and social networking app which competes with Ring’s Neighbors app, announced it would end a “Forward to Police” feature, which allowed users to share their posts or urgent alerts directly with law enforcement. The termination of this feature is part of the company’s “anti-racism work” and its “efforts to make Nextdoor a place where all neighbors feel welcome.”
This change from Nextdoor is a good sign—but it is only a very small step in the right direction. Unlike Nextdoor’s Forward to Police feature, which Nextdoor says was only used by a small percentage of law enforcement agencies, the growth of police-Ring partnerships is not showing any signs of slowing down. This must change. Citing the current protests against policing, Amazon halted the sale of its face recognition tool, Rekognition, to police for one year—but that signal means nothing to the public if the company continues to expedite police access to home surveillance footage.
There’s no evidence Ring surveillance makes neighborhoods safer, but there is evidence that it can make police less accountable, invade community privacy, and stoke the fires of racial prejudice. If Amazon is truly concerned about “systemic racism and injustice,” as the company claims, it must immediately recognize the danger of building a digital porch-to-police pipeline, and end these invasive, reckless Ring-police partnerships.
1. Ring currently has partnerships with 1403 law enforcement agencies. A cursory analysis of data from Mapping Police Violence (MPV) from 2015 to the present identified 559 of these agencies as also having been responsible for at least one police-involved death (40%), while in the same time period, the Fatal Encounters dataset identified 695 such agencies as having been responsible for at least one police-involved death (50%). Additionally, MPV reports 6084 deaths, of which agencies with Ring partnerships accounted for 2165, while Fatal Encounters reports 9635 deaths, with 3382 involving agencies with Ring partnerships (both roughly 35% of total deaths).
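The percentages in this footnote follow directly from the cited counts; a quick sketch, using only the numbers above, reproduces them:

```python
# Counts cited above: Ring's 1403 partner agencies cross-referenced against
# the Mapping Police Violence (MPV) and Fatal Encounters (FE) datasets.
figures = [
    ("MPV agencies with a death", 559, 1403),      # share of Ring partners
    ("FE agencies with a death", 695, 1403),
    ("MPV deaths involving partners", 2165, 6084),  # share of all deaths
    ("FE deaths involving partners", 3382, 9635),
]

for label, numerator, denominator in figures:
    pct = 100 * numerator / denominator
    print(f"{label}: {pct:.1f}%")
```

This yields 39.8%, 49.5%, 35.6%, and 35.1%, matching the rounded 40%, 50%, and roughly-35% figures in the text.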
Hundreds of Police Departments with Deadly Histories Partner with Amazon’s Ring Surveillance Cameras
San Francisco – Research by the Electronic Frontier Foundation (EFF) shows that hundreds of U.S. police departments with deadly histories have official partnerships with Amazon’s Ring—a home-surveillance company that makes it easy to send video footage to law enforcement.
Ring sells networked cameras, often bundled with doorbells or lighting, that record video when they register movement and then send notifications to owners’ cell phones. Ring’s partnerships allow police to seek access to private video footage directly from residents through a special web portal. Ring now works with over 1400 agencies, adding 600 in the last six months alone. An analysis of data from Ring, Fatal Encounters, and Mapping Police Violence shows that roughly half of the agencies that Ring has partnered with had fatal encounters in the last five years. In fact, those departments have been responsible for over a third of fatal police encounters nationwide, including the deaths of Breonna Taylor, Alton Sterling, Botham Jean, Antonio Valenzuela, Michael Ramos, and Sean Monterrosa.
“At a time when communities are more concerned than ever before about their relationship with law enforcement, these partnerships encourage an atmosphere of mistrust and could allow for near-constant surveillance by local police,” said EFF Digital Strategist Jason Kelley. “These partnerships make it all too easy for law enforcement to harass, arrest, or detain someone who is simply exercising their legal rights to free expression—for example, by walking through a neighborhood, protesting in their local community, or canvassing for a political campaign.”
Recently, Nextdoor’s Forward to Police feature, which allowed users to share their posts or urgent alerts directly with law enforcement, was ended as part of the company’s “anti-racism” work. EFF calls on Ring to do the same by ending its partnerships with law enforcement agencies.
“People across the nation are calling for policing reform,” said Kelley. “Amazon has acknowledged this important movement, and stopped selling its face recognition tool called Rekognition to police for one year. But that concession means nothing to the public if the company continues to expedite police access to home surveillance footage through Ring.”
For the full report:
For EFF’s petition to Amazon:
"There's an old saying in Tennessee — I know it's in Texas, probably in Tennessee — that says, fool me once, shame on — shame on you. Fool me — you can't get fooled again." -President George W Bush
Anti-monopoly enforcement has seen a significant shift since the 1970s. Where the Department of Justice once routinely brought suits against anticompetitive mergers, today that's extremely rare, even between giant companies in highly concentrated industries. (The strongest remedy against a monopolist—breaking them up altogether—is a relic of the past.) Regulators used to go to court to block mergers to prevent companies from growing so large that they could abuse their market power. In place of blocking mergers, today's competition regulators like to add terms and conditions to them, exacting promises from companies to behave themselves after the merger is complete. This safeguard continues to enjoy popularity with competition regulators, despite the fact that companies routinely break their public promises to protect users' privacy and rarely face consequences for doing so. (These two facts may be related!) When they do get sanctioned, the punishment almost never exceeds the profits from the broken promise. "A fine is a price." Today, we'd like to propose a modest, incremental improvement to this underpowered deterrent: if a company breaks a promise, and later makes the same promise when seeking approval for a merger, we should not believe it.
Read on for three significant broken promises we'd be fools to believe again.
"There's a sucker born every minute" -Traditional (often misattributed to PT Barnum)
Amazon promised not to use data from the sellers on its platform to launch competing products. It lied. In the summer of 2019, Amazon's General Counsel Nate Sutton made the company's position crystal clear when he told Congress, "We don't use individual seller data directly to compete." In April, the Wall Street Journal spoke to 20 former Amazon employees who said they did exactly this, confirming the suspicions of Amazon sellers, who'd been told that it was just a coincidence that the world's largest online retailer kept cloning the most successful products on its platform.
"Insanity is doing the same thing over and over again, but expecting different results." -Rita Mae Brown (often misattributed to Albert Einstein)
In 2014, Facebook bought WhatsApp for $19b, and promised users that it wouldn't harvest their data and mix it with the surveillance troves it got from Facebook and Instagram. It lied. Years later, Facebook mixes data from all of its properties, mining it for insights that ultimately help advertisers, political campaigns, and fraudsters find prospects for whatever they're peddling. Today, Facebook is in the process of acquiring Giphy, and while Giphy currently doesn’t track users when they embed GIFs in messages, Facebook could start doing so at any time.
"Once is happenstance. Twice is coincidence. The third time it’s enemy action." -Ian Fleming
In 2007, Google bought DoubleClick, a web advertising network. It promised not to merge advertising data with Google user profiles. It lied. Like most Big Tech companies, much of Google's growth comes from buying smaller companies. Because Google's primary revenue source is targeted advertising, these mergers inevitably raise questions about data-mining. As Google’s role in online advertising comes under scrutiny, antitrust enforcers must not accept a new “We mean it, this time. Seriously, guys” promise to protect user data. It would be easy to argue that promises made in a formal settlement with antitrust enforcers will carry more weight than mere public statements about what a company will do. But those public promises can keep customers unaware, and enforcers away, as the company extends its dominance. Those promises have to mean something. Hiding privacy abuses behind false promises “effectively raises rivals’ costs, as they try to compete against what appears to be high quality, but is, in truth, low quality.” That could raise antitrust liability. An overhaul to competition enforcement is long past due, but while that's happening, can we take the absolute smallest step toward prudent regulation and stop believing liars when they tell the same lies?
The government recently released a superseding indictment against Wikileaks editor in chief Julian Assange, currently imprisoned and awaiting extradition in the United Kingdom. As we’ve written before, this prosecution poses a clear threat to journalism, and, whether or not Assange considers himself a journalist, the indictment targets routine journalistic practices such as working with and encouraging sources during an investigation.
While considering the superseding indictment, it’s useful to look at some of the features carrying over from the previous version. Through much of the indictment, the government describes and refers back to a page on the Wikileaks website describing the “Most Wanted Leaks of 2009.” The implication in the indictment is that Wikileaks was actively soliciting leaks with this Most Wanted Leaks list, but the government is leaving out a crucial piece of nuance about the Most Wanted Leaks page: Unlike much of Wikileaks.org, the Most Wanted Leaks was actually a publicly-editable wiki.
Rather than viewing this document as a wishlist generated by Wikileaks staff or a reflection of Assange’s personal priorities, we must understand that this was a publicly-generated list developed by contributors who felt each “wanted” document offered information that would be valuable to the public.
Archives of the page show evidence of the editable nature of the document:
The Most Wanted Leaks page shows that it has visible “edit” links, similar to what one might find on any wiki page.
Language on the page says that one can "securely and anonymously" add a nomination by directly editing the page, and goes on to encourage contributors to "simply click 'edit' on the country below" to make changes.
While we don’t know how many people contributed to the page at the time, the different formatting and writing styles across the page support the idea that this page was edited by many different people. But the government’s indictment, which names this document no less than 14 times and dedicates multiple pages to describing it, never explains the crowd-sourced nature of the Most Wanted Leaks document.
It’s easy to understand why. The government prosecutors are trying to paint a picture of Assange as a mastermind soliciting leaks, and are charging him with violating computer crime law and the Espionage Act. It doesn’t suit their narrative to show Wikileaks as a host for a crowdsourced page where activists, scholars, and government accountability experts from across the globe could safely and anonymously offer their feedback on the transparency failures of their own governments. But as we analyze the indictment, it’s important that we keep this context in mind. It’s overly simplistic to describe the Most Wanted Leaks list, as the government does in its indictment, as “ASSANGE’s solicitation of classified information made through the Wikileaks website" or a way "to recruit individuals to hack into computers and/or illegally obtain and disclose classified information to Wikileaks." This framing excises the role of the untold number of contributors to this page, and lacks an understanding of how modern wikis and distributed projects work.
We’ve long argued that working with sources to obtain classified documents of public importance is a well-established and integral part of investigative journalism, protected by the First Amendment. Even if Assange had himself written and posted everything on the Most Wanted Leaks page, the First Amendment would protect his right to do so. There is no law in the United States that would prevent a publisher from publicly listing the types of stories or leaks they would like to see made public. But that’s not what happened here—in this case, Wikileaks was providing a forum where contributors from around the world could identify documents and data they felt were important to be made public. And the First Amendment clearly protects the rights of websites to host a public forum of that nature.
Many of the documents on the Most Wanted Leaks page are of clear public interest. Some of the documents requested by editors of the page include:
- Lists of domains that are censored (or on proposed or voluntary censorship lists) in China, Australia, Germany, and the U.K.
- In Austria, the source code for e-Voting systems used in student elections.
- Documents detailing the Vatican’s interactions with Nazi Germany.
- Profit-sharing agreements between the Ugandan government and oil companies.
- PACER - the United States’ federal court record search database.
While today it’s in the government’s interest to paint Wikileaks as a rogue band of hackers directed by Assange, the Most Wanted Leaks page epitomizes one of the most important features of Wikileaks: that as a publisher, it served the public interest. Wikileaks served activists, human rights defenders, scholars, reformers, journalists and other members of the public. With the Most Wanted Leaks page, it gave members of the public a platform to speak anonymously about documents they believed would further public understanding. It’s an astonishingly thoughtful and democratic way for the public to educate and communicate their priorities to potential whistleblowers, those in power, and other members of the public.
The ways Wikileaks served and furthered the public interest doesn’t fit the prosecution’s litigation strategy. If Assange goes to court to combat the Espionage charges he is facing, he may well be prevented from discussing the public interest and impact of Wikileaks’ publication history. That’s because the Espionage Act, passed in 1917, pre-dated modern communications technology and was never designed as a tool to crack down on investigative journalists and their publishers. There’s no public interest defense to the Espionage Act, and those charged under the Espionage Act may have no chance to even explain their motivation or the impact—good or bad—of their actions.
Assange’s arrest in April 2019 was based on a single charge, under the Computer Fraud and Abuse Act (CFAA), arising from a single, unsuccessful attempt to crack a password. At the time, it was clear to us that the government’s CFAA charge was being used as a thin cover for attacking the journalism. The original May 2019 Superseding Indictment added 17 charges to the CFAA charge, and clarified that the government was charging both conspiracy and a direct violation. In the Second Superseding Indictment, however, the direct CFAA charge is gone, leaving the charge of Conspiracy to Commit Computer Intrusion. The government removed the paragraphs specifying the password cracking as the particular factual grounds, now basing this Count vaguely on the acts “described in the [27 page] General Allegations Section of this Indictment.”
Removing the direct CFAA charge does not make this indictment any less dangerous as an attack on journalism. These General Allegations include many normal journalistic practices, all essential to modern reporting: communicating over secure chat services, transferring files with cloud services, removing usernames and logs to protect a source’s identity, and, now in the Second Superseding Indictment, maintaining a crowd-sourced list of documents that contributors believed would further public understanding. By removing the factual specificity, the Second Superseding Indictment only deepens the chilling effect on journalists who use modern tools to report on matters of public interest.
Regardless of how you feel about Assange as a person, we should all be concerned about his prosecution. If found guilty, the harm won’t be just to Assange himself—it will be to every journalist and news outlet that will face legal uncertainty for working with sources to publish leaked information. And a weakened press ultimately hurts the public’s ability to access truthful and relevant information about those in power. And that is directly against the public interest.
 A superseding indictment means that the government is replacing its original charges with new, amended charges.
 The Most Wanted Leaks document was also submitted in Chelsea Manning’s trial. https://www.sandiegouniontribune.com/sdut-wikileaks-most-wanted-list-admitted-in-trial-2013jul01-story.html
Related Cases: Bank Julius Baer & Co v. Wikileaks
Special thanks to legal intern Rachel Sommers, who was the lead author of this post.
Visa applicants to the United States are required to disclose personal information including their work, travel, and family histories. And as of May 2019, they are required to register their social media accounts with the U.S. government. According to the State Department, approximately 14.7 million people will be affected by this new policy each year.
EFF recently filed an amicus brief in Doc Society v. Pompeo, a case challenging this “Registration Requirement” under the First Amendment. The plaintiffs in the case, two U.S.-based documentary film organizations that regularly collaborate with non-U.S. filmmakers and other international partners, argue that the Registration Requirement violates the expressive and associational rights of both their non-U.S.-based and U.S.-based members and partners. After the government filed a motion to dismiss the lawsuit, we filed our brief in district court in support of the plaintiffs’ opposition to dismissal.
In our brief, we argue that the Registration Requirement invades privacy and chills the free speech and association of both visa applicants and those in their social networks, including U.S. persons, despite the fact that the policy targets only publicly available information. These effects are amplified by the staggering number of social media users affected and the vast amounts of personal information they publicly share—both intentionally and unintentionally—on their social media accounts.
Social media profiles paint alarmingly detailed pictures of their users’ personal lives. By monitoring applicants’ social media profiles, the government can obtain information that it otherwise would not have access to through the visa application process. For example, visa applicants are not required to disclose their political views. However, applicants might choose to post their beliefs on their social media profiles. Those seeking to conceal such information might still be exposed by comments and tags made by other users. And due to the complex interactions of social media networks, studies have shown that personal information about users such as sexual orientation can reliably be inferred even when the user doesn’t expressly share that information. Although consular officers might be instructed to ignore this information, it is not unreasonable to fear that it might influence their decisions anyway.
Just as other users’ online activity can reveal information about visa applicants, so too can visa applicants’ online activity reveal information about other users, including U.S. persons. For example, if a visa applicant tags another user in a political rant or posts photographs of themselves and the other user at a political rally, government officials might correctly infer that the other user shares the applicant’s political beliefs. In fact, one study demonstrated that it is possible to accurately predict personal information about those who do not use any form of social media based solely on personal information and contact lists shared by those who do. The government’s surveillance of visa applicants’ social media profiles thus facilitates the surveillance of millions—if not billions—more people.
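A toy sketch can make this kind of inference concrete. Everything below is hypothetical (invented names and attributes, and a naive majority-vote rule rather than the methodology of any cited study); it only illustrates how an observer could guess an undisclosed attribute from what a person's contacts share publicly:

```python
from collections import Counter

# Hypothetical social graph: user -> set of known contacts (not real data).
contacts = {
    "applicant": {"friend_a", "friend_b", "friend_c", "friend_d"},
}

# Publicly visible attribute (e.g., a stated political leaning) for some users.
public_attribute = {
    "friend_a": "party_x",
    "friend_b": "party_x",
    "friend_c": "party_x",
    "friend_d": "party_y",
}

def infer_attribute(user):
    """Naive homophily-based guess: the most common attribute among contacts.

    Returns the guessed attribute and the fraction of contacts supporting it.
    """
    votes = Counter(
        public_attribute[c] for c in contacts[user] if c in public_attribute
    )
    guess, count = votes.most_common(1)[0]
    return guess, count / sum(votes.values())

print(infer_attribute("applicant"))  # ('party_x', 0.75)
```

The applicant never disclosed anything, yet a plausible guess (with a confidence score) falls out of their contacts' public posts. Real studies use far more sophisticated models, but the underlying leak is the same.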
Because social media users have privacy interests in their public social media profiles, government surveillance of digital content risks chilling free speech. If visa applicants know that the government can glean vast amounts of personal information about them from their profiles—or that their anonymous or pseudonymous accounts can be linked to their real-world identities—they will be inclined to engage in self-censorship. Many will likely curtail or alter their behavior online—or even disengage from social media altogether. Importantly, because of the interconnected nature of social media, these chilling effects extend to those in visa applicants’ social networks, including U.S. persons.
Studies confirm these chilling effects. Citizen Lab found that 62 percent of survey respondents would be less likely to “speak or write about certain topics online” if they knew that the government was engaged in online surveillance. A Pew Research Center survey found that 34 percent of its survey respondents who were aware of the online surveillance programs revealed by Edward Snowden had taken at least one step to shield their information from the government, including using social media less often, uninstalling certain apps, and avoiding the use of certain terms in their digital communications.
One might be tempted to argue that concerned applicants can simply set their accounts to private. But public sharing is not always deliberate: while some users choose to share their personal information—including their names, locations, photographs, relationships, interests, and opinions—with the public writ large, others do so unintentionally. Given the difficulties associated with navigating privacy settings within and across platforms and the fact that privacy settings often change without warning, there is good reason to believe that many users publicly share more personal information than they think they do. Moreover, some applicants might fear that setting their accounts to private will negatively impact their applications. Others—especially those using social media anonymously or pseudonymously—might be loath to maximize their privacy settings because they use their platforms with the specific intention of reaching large audiences.
These chilling effects are further strengthened by the broad scope of the Registration Requirement, which allows the government to continue surveilling applicants’ social media profiles once the application process is over. Personal information obtained from those profiles can also be collected and stored in government databases for decades. And that information can be shared with other domestic and foreign governmental entities, as well as current and prospective employers and other third parties. It is no wonder, then, that social media users might severely limit or change the way they use social media.
Secrecy should not be a prerequisite for privacy—and the review and collection by the government of personal information that is clearly outside the scope of the visa application process creates unwarranted chilling effects on both visa applicants and their social media associates, including U.S. persons. We hope that the D.C. district court denies the government’s motion to dismiss the case and ultimately strikes down the Registration Requirement as unconstitutional under the First Amendment.
COVID-19 has pushed millions of people to work from home, and a flock of companies offering software for tracking workers has swooped in to pitch their products to employers across the country.
The services often sound relatively innocuous. Some vendors bill their tools as “automatic time tracking” or “workplace analytics” software. Others market to companies concerned about data breaches or intellectual property theft. We’ll call these tools, collectively, “bossware.” While aimed at helping employers, bossware puts workers’ privacy and security at risk by logging every click and keystroke, covertly gathering information for lawsuits, and using other spying features that go far beyond what is necessary and proportionate to manage a workforce.
This is not OK. When a home becomes an office, it remains a home. Workers should not be subject to nonconsensual surveillance, or feel pressured into being scrutinized in their own homes, just to keep their jobs.
What can they do?
Bossware typically lives on a computer or smartphone and has privileges to access data about everything that happens on that device. Most bossware collects, more or less, everything that the user does. We looked at marketing materials, demos, and customer reviews to get a sense of how these tools work. There are too many individual types of monitoring to list here, but we’ll try to break down the ways these products can surveil into general categories.
The broadest and most common type of surveillance is “activity monitoring.” This typically includes a log of which applications and websites workers use. It may include who they email or message—including subject lines and other metadata—and any posts they make on social media. Most bossware also records levels of input from the keyboard and mouse—for example, many tools give a minute-by-minute breakdown of how much a user types and clicks, using that as a proxy for productivity. Productivity monitoring software will attempt to assemble all of this data into simple charts or graphs that give managers a high-level view of what workers are doing.
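To make the "proxy for productivity" idea concrete, here is a toy sketch (an invented event log and bucketing rule, not any vendor's actual implementation) of how raw input events could be rolled up into the minute-by-minute counts such dashboards display:

```python
from collections import Counter
from datetime import datetime

# Hypothetical raw input log: (timestamp, event type) pairs, as a monitoring
# agent might record them. All values are invented for illustration.
events = [
    ("2020-06-30 09:00:12", "key"),
    ("2020-06-30 09:00:45", "key"),
    ("2020-06-30 09:00:59", "click"),
    ("2020-06-30 09:02:03", "key"),
]

# Bucket events by minute; minutes with no events read as "idle".
per_minute = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").strftime("%H:%M")
    for ts, _ in events
)
print(dict(per_minute))  # {'09:00': 3, '09:02': 1}
```

Note what the proxy misses: the "idle" minute at 09:01 could be a phone call, reading, or thinking, which is exactly why input counts are such a crude measure of productivity.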
Every product we looked at has the ability to take frequent screenshots of each worker’s device, and some provide direct, live video feeds of their screens. This raw image data is often arrayed in a timeline, so bosses can go back through a worker’s day and see what they were doing at any given point. Several products also act as keyloggers, recording every keystroke a worker makes, including unsent emails and private passwords. A couple even let administrators jump in and take over remote control of a user’s desktop. These products usually make no distinction between work-related activity and personal information like account credentials, bank data, or medical records.
InterGuard advertises that its software “can be silently and remotely installed, so you can conduct covert investigations [of your workers] and bullet-proof evidence gathering without alarming the suspected wrongdoer.”
Some bossware goes even further, reaching into the physical world around a worker’s device. Companies that offer software for mobile devices nearly always include location tracking using GPS data. At least two services—StaffCop Enterprise and CleverControl—let employers secretly activate webcams and microphones on worker devices.
There are, broadly, two ways bossware can be deployed: as an app that’s visible to (and maybe even controllable by) the worker, or as a secret background process that workers can’t see. Most companies we looked at give employers the option to install their software either way.
Visible monitoring
Sometimes, workers can see the software that is surveilling them. They may have the option to turn the surveillance on or off, often framed as “clocking in” and “clocking out.” Of course, the fact that a worker has turned off monitoring will be visible to their employer. For example, with Time Doctor, workers may be given the option to delete particular screenshots from their work session. However, deleting a screenshot will also delete the associated work time, so workers only get credit for the time during which they are monitored.
Workers may be given access to some, or all, of the information that’s collected about them. Crossover, the company behind WorkSmart, compares its product to a fitness tracker for computer work. Its interface allows workers to see the system’s conclusions about their own activity presented in an array of graphs and charts.
Different bossware companies offer different levels of transparency to workers. Some give workers access to all, or most, of the information that their managers have. Others, like Teramind, indicate that they are turned on and collecting data, but don’t reveal everything they’re collecting. In either case, it can often be unclear to the user what data, exactly, is being collected, without specific requests to their employer or careful scrutiny of the software itself.
Invisible monitoring
The majority of companies that build visible monitoring software also make products that try to hide themselves from the people they’re monitoring. Teramind, Time Doctor, StaffCop, and others make bossware that’s designed to be as difficult to detect and remove as possible. At a technical level, these products are indistinguishable from stalkerware. In fact, some companies require employers to specifically configure antivirus software before installing their products, so that the worker’s antivirus won’t detect and block the monitoring software’s activity.
This kind of software is marketed for a specific purpose: monitoring workers. However, most of these products are really just general-purpose monitoring tools. StaffCop offers a version of its product specifically designed for monitoring children’s use of the Internet at home, and ActivTrak states that its software can also be used by parents or school officials to monitor kids’ activity. Customer reviews for some of the software indicate that many customers do indeed use these tools outside of the office.
Most companies that offer invisible monitoring recommend that it only be used for devices that the employer owns. However, many also offer features like remote and “silent” installation that can load monitoring software on worker computers, without their knowledge, while their devices are outside the office. This works because many employers have administrative privileges on computers they distribute. But for some workers, the company laptop they use is their only computer, so company monitoring is ever-present. There is great potential for misuse of this software by employers, school officials, and intimate partners. And the victims may never know that they are subject to such monitoring.
The table below shows the monitoring and control features available from a small sample of bossware vendors. This isn’t a comprehensive list, and may not be representative of the industry as a whole; we looked at companies that were referred to in industry guides and search results that had informative publicly-facing marketing materials.
Table: Common surveillance features of bossware products
- Activity monitoring (apps, websites)
- Screenshots or screen recordings
- Webcam/microphone activation
- Can be made "invisible"
Features of several worker-monitoring products, based on the companies’ marketing material. 9 of the 10 companies we looked at offered “silent” or “invisible” monitoring software, which can collect data without worker knowledge.
How common is bossware?
The worker surveillance business is not new, and it was already quite large before the outbreak of a global pandemic. While it’s difficult to assess how common bossware is, it’s undoubtedly become much more common as workers are forced to work from home due to COVID-19. Awareness Technologies, which owns InterGuard, claimed to have grown its customer base by over 300% in just the first few weeks after the outbreak. Many of the vendors we looked at exploit COVID-19 in their marketing pitches to companies.
Some of the biggest companies in the world use bossware. Hubstaff customers include Instacart, Groupon, and Ring. Time Doctor claims 83,000 users; its customers include Allstate, Ericsson, Verizon, and Re/Max. ActivTrak is used by more than 6,500 organizations, including Arizona State University, Emory University, and the cities of Denver and Malibu. Companies like StaffCop and Teramind do not disclose information about their customers, but claim to serve clients in industries like health care, banking, fashion, manufacturing, and call centers. Customer reviews of monitoring software give more examples of how these tools are used.
We don’t know how many of these organizations choose to use invisible monitoring, since the employers themselves don’t tend to advertise it. In addition, there isn’t a reliable way for workers themselves to know, since so much invisible software is explicitly designed to evade detection. Some workers have contracts that authorize certain kinds of monitoring or prevent others. But for many workers, it may be impossible to tell whether they’re being watched. Workers who are concerned about the possibility of monitoring may be safest to assume that any employer-provided device is tracking them.

What is the data used for?
Bossware vendors market their products for a wide variety of uses. Some of the most common are time tracking, productivity tracking, compliance with data protection laws, and IP theft prevention. Some use cases may be valid: for example, companies that deal with sensitive data often have legal obligations to make sure data isn’t leaked or stolen from company computers. For off-site workers, this may necessitate a certain level of on-device monitoring. But an employer should not undertake any monitoring for such security purposes unless they can show it is necessary, proportionate, and specific to the problems it’s trying to solve.
Unfortunately, many use cases involve employers wielding excessive power over workers. Perhaps the largest class of products we looked at are designed for “productivity monitoring” or enhanced time tracking—that is, recording everything that workers do to make sure they’re working hard enough. Some companies frame their tools as potential boons for both managers and workers. Collecting information about every second of a worker’s day isn’t just good for bosses, they claim—it supposedly helps the worker, too. Other vendors, like Work Examiner and StaffCop, market themselves directly to managers who don’t trust their staff. These companies often recommend tying layoffs or bonuses to performance metrics derived from their products.
Some firms also market their products as punitive tools, or as ways to gather evidence for potential worker lawsuits. InterGuard advertises that its software “can be silently and remotely installed, so you can conduct covert investigations [of your workers] and bullet-proof evidence gathering without alarming the suspected wrongdoer.” This evidence, it continues, can be used to fight “wrongful termination suits.” In other words, InterGuard can provide employers with an astronomical amount of private, secretly-gathered information to try to quash workers’ legal recourse against unfair treatment.
None of these use cases, even the less-disturbing ones discussed above, warrant the amount of information that bossware usually collects. And nothing justifies hiding the fact that the surveillance is happening at all.
Most products take periodic screenshots, and few of them allow workers to choose which ones to share. This means that sensitive medical, banking, or other personal information is captured alongside screenshots of work emails and social media. Products that include keyloggers are even more invasive, and often end up capturing passwords to workers’ personal accounts.
Unfortunately, excessive information collection often isn’t an accident; it’s a feature. Work Examiner specifically advertises its product’s ability to capture private passwords. Another company, Teramind, reports on every piece of information typed into an email client—even if that information is subsequently deleted. Several products also parse out strings of text from private messages on social media so that employers can know the most intimate details of workers’ personal conversations.
Let’s be clear: this software is specifically designed to help employers read workers’ private messages without their knowledge or consent. By any measure, this is unnecessary and unethical.

What can you do?
Under current U.S. law, employers have too much leeway to install surveillance software on devices they own. In addition, little prevents them from coercing workers to install software on their own devices (as long as the surveillance can be disabled outside of work hours). Different states have different rules about what employers can and can’t do. But workers often have limited legal recourse against intrusive monitoring software.
That can and must change. As state and national legislatures continue to adopt consumer data privacy laws, they must also establish protections for workers with respect to their employers. To start:
- Surveillance of workers—even on employer-owned devices—should be necessary and proportionate.
- Tools should minimize the information they collect, and avoid vacuuming up personal data like private messages and passwords.
- Workers should have the right to know what exactly their managers are collecting.
- And workers need a private right of action, so they can sue employers that violate these statutory privacy protections.
In the meantime, workers who know they are subject to surveillance—and feel comfortable doing so—should engage in conversations with their employers. Companies that have adopted bossware must consider what their goals are, and should try to accomplish them in less-intrusive ways. Bossware often incentivizes the wrong kinds of productivity—for example, forcing people to jiggle their mouse and type every few minutes instead of reading or pausing to think. Constant monitoring can stifle creativity, diminish trust, and contribute to burnout. If employers are concerned about data security, they should consider tools that are specifically tailored to real threats, and which minimize the personal data caught up in the process.
Many workers won’t feel comfortable speaking up, or may suspect that their employers are monitoring them in secret. If they are unaware of the scope of monitoring, they should consider that work devices may collect everything—from web history to private messages to passwords. If possible, they should avoid using work devices for anything personal. And if workers are asked to install monitoring software on their personal devices, they may be able to ask their employers for a separate, work-specific device from which private information can be more easily siloed away.
Finally, workers may not feel comfortable speaking up about being surveilled out of concern for staying employed in a time with record unemployment. A choice between invasive and excessive monitoring and joblessness is not really a choice at all.
COVID-19 has put new stresses on us all, and it is likely to fundamentally change the ways we work as well. However, we must not let it usher in a new era of even-more-pervasive monitoring. We live more of our lives through our devices than ever before. That makes it more important than ever that we have a right to keep our digital lives private—from governments, tech companies, and our employers.
When individuals and companies are wrongly accused of patent infringement, they should be encouraged to stand up and defend themselves. When they win, the public does too. While the patent owner loses revenue, the rest of society gets greater access to knowledge, product choice, and space for innovation. This is especially true when defendants win by proving the patent asserted against them is invalid. In such cases, the patent gets cancelled, and the risk of wrongful threats against others vanishes.
The need to encourage parties to pursue meritorious defenses is partly why patent law gives judges the power to force losers to pay a winner’s legal fees in “exceptional” patent cases. This fee-shifting is especially important because so many invalid patents are in the hands of patent trolls, entities that exploit the exorbitant cost of litigating in federal court to scare defendants into paid settlements. When patent trolls abuse the litigation system, judges must make sure they pay a price.
However, proving invalidity in district court takes a lot of time and money. That’s why Congress created a faster, cheaper alternative when it passed the America Invents Act in 2011.
That alternative is the IPR system, which allows parties to get a decision on a patent’s validity in less expensive, streamlined proceedings at the Patent Office. One benefit of this system is the huge savings to parties and courts of avoiding needless patent litigation. Another is that going to the Patent Office should, in theory, yield more accurate decisions. When Congress created the IPR system, the whole point was to encourage parties to use it to make patent litigation cheaper and faster while improving the quality of issued patents by allowing the Patent Office to weed out those it shouldn’t have granted.
Fee-shifting and IPR are both meant to deter meritless patent lawsuits. That’s why we weighed in last year in a case called Dragon Intellectual Property v. Dish Network. In April, a panel of the Federal Circuit agreed with our position. It’s a win for properly applied fee-shifting in the patent system, and for every company that wants to fight back after being hit with a meritless patent threat.
In the Dragon v. Dish case, a district court tried to stop defendant Dish Network from getting its fees because of Dish’s success using the Patent Office’s inter partes review (IPR) system. That’s right—Dish was penalized for winning.
In this case, the district court saw that a party was successful at proving invalidity in an IPR—but then actually held that against the winning party. Dish Network was one of several defendants Dragon sued for infringement. After the suit was filed, Dish initiated IPR proceedings. But before those proceedings finished, the district court construed the patent’s claims in a way that required finding non-infringement. While that decision was on appeal, the Patent Office finished its review and found Dragon’s patent invalid.
Yet when Dish tried to recover the cost of litigating Dragon’s invalid patent, the district court refused on the ground that Dish wasn’t the prevailing party. Oddly, Dish’s success in proving invalidity at the Patent Office became grounds for stripping it of its separate win on non-infringement.
That ruling made no sense; if anything, Dish’s success in proving invalidity should reinforce, rather than undo, its status as the prevailing party. It would have also created a big new downside for defendants considering IPR proceedings: if they won, they could have lost prevailing party status in district court, and thus the possibility of recovering the cost of paying their attorneys. So EFF weighed in, filing an amicus brief in support of Dish’s prevailing party status with the Federal Circuit in February of 2019.
More than a year later, on April 21, 2020, the Federal Circuit finally ruled, agreeing with EFF that the district court’s finding of non-infringement made Dish the prevailing party. Dish’s parallel success in proving invalidity at the Patent Office did not change that. The Federal Circuit’s decision makes clear that proving invalidity at the Patent Office doesn’t make an earlier non-infringement win—and thus the possibility of recovering attorneys’ fees—disappear. That principle is important: If patent owners could save themselves from fee awards by having their patents invalidated by the Patent Office, they would have a perverse incentive to assert the worst patents in litigation.
But a month after the Federal Circuit issued its decision, Dragon filed a petition asking the full court to convene and re-hear the case. On June 24, the Federal Circuit finally denied that petition, making its decision final.
Even though the Federal Circuit was skeptical that fees would ultimately be recoverable in this case, its decision will help protect the IPR system, as well as proper fee-shifting. Those who need an efficient way to challenge a wrongly-granted patent will have one, and it won’t make the cost of district court litigation even greater.
This month, Americans are out in the streets, demanding police accountability. But rather than consider reform proposals, a key Senate committee is focused on giving unprecedented powers to law enforcement—including the ability to break into our private messages by creating encryption backdoors.
This Thursday, the Senate Judiciary Committee is scheduled to debate and vote on the so-called EARN IT Act, S. 3398, a bill that would allow the government to scan every message sent online. The EARN IT Act creates a 19-person commission that would be dominated by law enforcement agencies, with Attorney General William Barr at the top. This unelected commission would be authorized to make new rules on “best practices” that Internet websites will have to follow. Any Internet platform that doesn’t comply with this law enforcement wish list will lose the legal protections of Section 230.
The new rules that Attorney General Barr creates won’t just apply to social media or giant Internet companies. Section 230 is what protects owners of small online forums, websites, and blogs with comment sections, from being punished for the speech of others. Without Section 230 protections, platform owners and online moderators will have every incentive to over-censor speech, since they could potentially be sued out of existence based on someone else’s statements.
Proponents of EARN IT say that the bill isn’t about encryption or privacy. They’re cynically using crimes against children as an excuse to change online privacy standards. But it’s perfectly clear what the sponsors’ priorities are. Sen. Lindsey Graham (R-SC), one of EARN IT’s cosponsors, has introduced another bill that’s a direct attack on encrypted messaging. And Barr has said over and over again that encrypted services should be forced to offer police special access.
The EARN IT Act could end user privacy as we know it. Tech companies that provide private, encrypted messaging could have to re-write their software to allow police special access to their users’ messages.
This bill is a power grab by police. We need your help to stop it today. Contact your Senator and tell them to oppose the EARN IT Act.
Cities and states across the country have banned government use of face surveillance technology, and many more are weighing proposals to do so. From Boston to San Francisco, elected officials and activists rightfully know that face surveillance gives police the power to track us wherever we go, turns us all into perpetual suspects, increases the likelihood of being falsely arrested, and chills people’s willingness to participate in First Amendment protected activities.
That’s why we’re asking you to contact your elected officials and tell them to co-sponsor and vote yes on the Facial Recognition and Biometric Technology Moratorium Act of 2020.
TELL congress: END federal use of face surveillance
Three companies—IBM, Amazon, and Microsoft—have recently ended or suspended sales of face recognition to police departments, acknowledging the harms that this technology causes. Police and other government use of this technology cannot be responsibly regulated. Face surveillance in the hands of the government is a fundamentally harmful technology. Congress, states, and cities should take this momentary reprieve, during which police will not be able to acquire new face surveillance technology from these companies, as an opportunity to ban government use of the technology once and for all.
Face surveillance disproportionately hurts vulnerable communities. Recently the New York Times published a long piece on the case of Robert Julian-Borchak Williams, who was arrested by Detroit police after face recognition technology wrongly identified him as a suspect in a theft case. The ACLU filed a complaint on his behalf with the Detroit police. The problem isn’t just that studies have found face recognition highly inaccurate when it comes to matching the faces of people of color. The larger concern is that law enforcement will use this invasive and dangerous technology, as it unfortunately uses all such tools, to disparately surveil people of color.
This federal ban on face surveillance would apply to opaque and increasingly powerful agencies like Immigration and Customs Enforcement, the Drug Enforcement Administration, the Federal Bureau of Investigation, and Customs and Border Protection. The bill would ensure that these agencies cannot use this invasive technology to track, identify, and misidentify millions of people.
Tell your senators and representatives they must co-sponsor and pass the Facial Recognition and Biometric Technology Moratorium Act of 2020, introduced by Senators Edward Markey and Jeff Merkley and Representatives Ayanna Pressley, Pramila Jayapal, Rashida Tlaib, and Yvette Clarke. This bill would be a critical step to ensuring that mass surveillance systems don’t use your face to track, identify, or incriminate you. The bill would ban the use of face surveillance by the federal government, as well as withhold certain federal funding streams from local and state governments that use the technology. That’s why we’re asking you to insist your elected officials co-sponsor and vote “Yes” on the Facial Recognition and Biometric Technology Moratorium Act of 2020, S.4084 in the Senate.
TELL congress: END federal use of face surveillance
Security researchers have been talking about the vulnerabilities in 2G for years. 2G technology, which at one point underpinned the entire cellular communications network, is widely known to be vulnerable to eavesdropping and spoofing. But even though its insecurities are well known and the technology has long since become archaic, many people still rely on it as their main mobile technology, especially in rural areas. Even as carriers start rolling out the fifth generation of mobile communications, known as 5G, 2G technology is still supported by modern smartphones.
The manufacturers of operating systems for smartphones (e.g. Apple, Google, and Samsung) are in the perfect position to solve this problem by allowing users to switch off 2G.

What is 2G and why is it vulnerable?
2G is the second generation of mobile communications, introduced in 1991. It is an old technology that was not designed to protect its users against the risks we face today. As the years have gone by, many vulnerabilities have been discovered in 2G and its companion signaling protocol, SS7.
The primary problem with 2G stems from two facts. First, it uses weak encryption between the tower and device that can be cracked in real time by an attacker to intercept calls or text messages. In fact, the attacker can do this passively without ever transmitting a single packet. The second problem with 2G is that there is no authentication of the tower to the phone, which means that anyone can seamlessly impersonate a real 2G tower and your phone will never be the wiser.
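The second flaw can be illustrated with a toy model. Nothing below is real GSM protocol code: the class names and fields are purely hypothetical stand-ins, but they capture the asymmetry that a 2G phone proves itself to the network while the network never proves itself to the phone.

```python
# Toy model (not a real protocol implementation) of 2G's missing
# tower-to-phone authentication. All names here are illustrative
# assumptions, not actual GSM specification terms.
from dataclasses import dataclass

@dataclass
class Tower:
    network_name: str  # what the tower broadcasts, e.g. a carrier name
    operated_by: str   # ground truth, invisible to the handset

class Handset2G:
    """A 2G handset attaches to any tower broadcasting its carrier's
    name: the phone never verifies who actually operates the tower."""
    def __init__(self, carrier: str):
        self.carrier = carrier

    def attaches_to(self, tower: Tower) -> bool:
        # The only "check" is the self-reported broadcast name.
        return tower.network_name == self.carrier

phone = Handset2G(carrier="ExampleCell")
real_tower = Tower(network_name="ExampleCell", operated_by="ExampleCell")
fake_tower = Tower(network_name="ExampleCell",
                   operated_by="cell-site simulator")

assert phone.attaches_to(real_tower)
assert phone.attaches_to(fake_tower)  # impersonation succeeds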
Cell-site simulators sometimes work this way. They can exploit security flaws in 2G in order to intercept your communications. Even though many of the security flaws in 2G have been fixed in 4G, more advanced cell-site simulators can take advantage of remaining flaws to downgrade your connection to 2G, making your phone susceptible to the above attacks. This makes every user vulnerable—from journalists and activists to medical professionals, government officials, and law enforcement.

How do we fix it?
3G, 4G, and 5G deployments fix the worst vulnerabilities in 2G that allow for cell-site simulators to eavesdrop on SMS text messages and phone calls (though there are still some vulnerabilities left to fix). Unfortunately, many people worldwide still depend on 2G networks. Therefore, brand-new, top-of-the-line phones on the market today—such as Samsung Galaxy, Google Pixel, and the iPhone 11—still support 2G technology. And the vast majority of these smartphones don’t give users any way to switch off 2G support. That means these modern 3G and 4G phones are still vulnerable to being downgraded to 2G.
The simplest protection for users is to use encrypted messaging apps such as Signal whenever possible. But a better solution would be the ability to switch 2G off entirely so the connection can’t be downgraded. Unfortunately, this is not an option on iPhones or most Android phones.
Apple, Google, and Samsung should allow users to choose to switch 2G off in order to better protect ourselves. Ideally, smartphone OS makers would block 2G by default and allow users to turn it back on if they need it for connectivity in a remote area. Either way, with this simple action, Apple, Google, and Samsung could protect millions of their users from the worst harms of cell-site simulators.
The Brazilian Senate is scheduled to vote this week on the most recent version of PLS 2630/2020, the so-called “Fake News” bill. This new version, supposedly aimed at safety and at curbing “malicious coordinated actions” by users of social networks and private messaging apps, would allow the government to identify and track countless innocent users who haven’t committed any wrongdoing in order to catch a few malicious actors.
The bill creates a clumsy regulatory regime to intervene in the technology and policy decisions of both public and private messaging services in Brazil, requiring them to institute new takedown procedures, enforce various kinds of identification of all their users, and greatly increase the amount of information they gather and store from and about their users. The services would also have to ensure that all of that information can be directly accessed by staff in Brazil, so it is directly and immediately available to the government—bypassing the strong safeguards for users’ rights in existing international mechanisms such as Mutual Legal Assistance Treaties.
This sprawling bill is moving quickly, and it comes at a very bad time. Right now, secure communication technologies are more important than ever to cope with the COVID-19 pandemic, to collaborate and work securely, and to protest or organize online. It’s also really important for people to be able to have private conversations, including private political conversations. There are many things wrong with this bill, far more than we could fit into one article. For now, we’ll do a deep dive into five serious flaws in the existing bill that would undermine privacy, expression, and security.

Flaw 1: Forcing Social Media and Messaging Companies to Collect Legal Identification of All Users
The new draft of Article 7 is both clumsy and contradictory. First, the bill (Article 7, paragraph 3) requires “large” social networks and private messaging apps (those that offer service in Brazil to more than two million users) to identify every account’s user by requesting their national identity cards. It’s a retroactive and general requirement, meaning that identification must be requested for each and every existing user. Article 7’s main provision is not limited to identifying a user in response to a court order; it also applies when there is a complaint about an account’s activity, or when the company finds itself unsure of a user’s identity. While users are explicitly permitted to use pseudonyms, they may not keep their legal identities confidential from the service provider. Compelling companies to identify an online user should only be done in response to a request by a competent authority, not a priori. In India, a similar proposal is expected to be released by the country’s IT Ministry, although reports indicate that ID verification would be optional.
In 2003, Brazil made SIM card registration mandatory for prepaid cell phones, requiring prepaid subscribers to present proof of identity, such as their official national identity card, driver’s license, or taxpayer number. Article 39 of the new draft expands that law by creating new mandatory identification requirements for obtaining telephone SIM cards, and Article 8 explicitly requires private messaging applications that identify their users via an associated telephone number to delete accounts whenever the underlying telephone number is deregistered. Telephone operators are required to help with this process by providing a list of numbers that are no longer used by the original subscriber. SIM card registration undermines peoples’ ability to communicate, organize, and associate with others anonymously. David Kaye, the United Nations Special Rapporteur on Freedom of Expression and Opinion, has asked states to refrain from making the identification of users a condition for access to digital communications and online services, and from requiring SIM card registration for mobile users.
Even if the draft text eliminates Article 7, the draft remains dangerous to free expression because authorities will still be allowed to identify users of private messaging services by linking a cell phone number to an account. The Brazilian authorities will have to unmask the identity of the internet user by following domestic procedures for accessing such data from the telecom provider.
Internet users will be obliged to hand over identifying information to big tech companies if Article 7 is approved as currently written, with or without paragraph 3. The compulsory identification provision is a blatant infringement on the due process rights of individuals. Countries like China and South Korea have mandated that users register their real names and identification numbers with online service providers. South Korea used to require websites with more than 100,000 visitors per day to authenticate users’ identities by having them enter their resident ID numbers when using portals or other sites. But South Korea’s Constitutional Court struck down the law as unconstitutional, stating that "the [mandatory identification] system does not seem to have been beneficial to the public. Despite the enforcement of the system, the number of illegal or malicious postings online has not decreased.”

Flaw 2: Forcing Messaging Companies to Retain Records of Everyone’s Forwarded Messages
Article 10 compels social networks and private messaging applications to retain the chain of all communications that have been “massively forwarded,” for potential use in a criminal investigation or prosecution. The new draft requires three months of data storage of the complete chain of communication for such messages, including the date and time of forwarding and the total number of users who receive the message. These obligations are conditioned on virality thresholds: they apply when an instance of a message has been forwarded to groups or lists by more than 5 users within 15 days, and when the message’s content has reached 1,000 or more users. The service provider is also apparently expected to temporarily retain this data for all forwarded messages during the 15-day period in order to determine whether or not the virality threshold for “massively forwarded” will be met. This provision blatantly infringes on due process rights by compelling providers to retain everyone’s communication before anyone has committed any legally defined offense.
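To make those thresholds concrete, here is a rough sketch of the bookkeeping the bill would force a provider to do. The numbers (more than 5 forwarders within 15 days, 1,000 or more recipients) come from the draft; the data structures and function names are our own illustrative assumptions, not anything the bill specifies.

```python
# Illustrative sketch only: the thresholds come from the draft bill,
# but every name and data structure here is a hypothetical simplification.
from dataclasses import dataclass

FORWARDER_THRESHOLD = 5     # "more than 5 users" forwarding to groups/lists
WINDOW_DAYS = 15            # within a 15-day period
RECIPIENT_THRESHOLD = 1000  # message content reached 1,000 or more users

@dataclass
class ForwardEvent:
    user_id: str     # who forwarded the message
    day: int         # simplified stand-in for a timestamp
    recipients: int  # how many users this forward reached

def is_massively_forwarded(events: list[ForwardEvent]) -> bool:
    """Return True if any 15-day window contains forwards from more than
    5 distinct users while the message has reached 1,000+ users overall."""
    if sum(e.recipients for e in events) < RECIPIENT_THRESHOLD:
        return False
    for start in events:
        window = [e for e in events
                  if start.day <= e.day < start.day + WINDOW_DAYS]
        if len({e.user_id for e in window}) > FORWARDER_THRESHOLD:
            return True
    return False

# A message forwarded by six different users in one week, reaching 1,200 people:
viral = [ForwardEvent(f"user{i}", day=i, recipients=200) for i in range(6)]
assert is_massively_forwarded(viral)

# A single forward to a small group never crosses the thresholds:
quiet = [ForwardEvent("user0", day=0, recipients=50)]
assert not is_massively_forwarded(quiet)
```

Note that to evaluate these thresholds at all, a provider must log who forwarded what, when, and to how many people for every forwarded message, which is exactly the mass retention this provision demands.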
There have also been significant changes to how this text interacts with encryption and with communications’ providers efforts to know less about what their users are doing. These mandatory retention requirements may create an incentive to weaken end-to-end encryption, because end-to-end encrypted services may not be able to comply with provisions requiring them to recognize when a particular message has been independently forwarded a certain number of times without undermining the security of their encryption.
Although the current draft (unlike previous versions) does not create new crimes, it requires providers to trace messages before any crime has been committed so that the information can be used in the future in a criminal investigation or prosecution of specific crimes defined in articles 138 to 140, or article 147, of Brazil’s Penal Code, such as defamation, threats, and calúnia (falsely accusing someone of a crime). This means, for example, that if you share a message that denounces corruption by a local authority and it gets forwarded more than 1,000 times, authorities may criminally accuse you of calúnia against your local authority.
Companies must limit the retention of personal data to what is reasonably necessary and proportionate to certain legitimate business purposes. This is “data minimization”: the principle that any company should minimize its processing of consumer data. Minimization is an important tool in the data protection toolbox. This bill goes against that, favoring dangerous big data collection practices.

Flaw 3: Banning Messaging Companies from Allowing Broadcast Groups, Even if Users Sign Up
Articles 9 and 11 require broadcast and discussion group sizes in private messaging tools to have a maximum membership limit (something that WhatsApp does today, but that not every communications tool necessarily does or will do), and that the ability to reach mass audiences via private messaging platforms must be strictly limited and controlled, even when those audiences opt in. The vision of the bill seems to be that mass discussion and mass broadcast are inherently dangerous and must only happen in public, and that no one should create forums or media for these interactions to happen in a truly private way, even with clear and explicit consent by the participants or recipients.
If an organization like an NGO, a labor union, or a political party wanted to host a discussion forum for its membership, or to send its newsletter to all members who’ve chosen to receive it through a tool like WhatsApp, Articles 9 and 11 would require that content to be visible to, and subject to the control of, a platform operator—at least once some (unspecified) audience size limit was reached.

Flaw 4: Forcing Social Media and Messaging Companies to Make Private User Logs Available Remotely
Article 37 compels large social networks and private messaging apps to appoint legal representatives in Brazil. It also forces those companies to provide remote access to their user databases and logs to their staff in Brazil so the local employees can be directly forced to turn them over.
This undermines user security and privacy. It increases the number of employees (and devices) that can access sensitive data and reduces the company’s ability to control vulnerabilities and unauthorized access, not least because this requirement is global in scale: should it be adopted in Brazil, it could be replicated by other countries. Each new person and each new device adds a new security risk.

Flaw 5: No Limitations on Applying this Law to Users Outside of Brazil
Paragraphs 1 and 2 of Article 1 provide some jurisdictional exclusions, but all of these are applied at the company level—that is, a foreign company could be exempt if it is small (fewer than 2,000,000 users) or does not offer services to Brazil. None of these limitations, however, relate to the users’ nationality or location. Thus, the bill, by its terms, requires a company to create certain policies and procedures about content takedowns, mandatory identification of users, and other topics, which are not themselves in any way limited to people based in Brazil. Even if the intent is only to force the collection of ID documents from users who are based in Brazil, the bill neglects to say so.

Addressing “Fake News” Without Undermining Human Rights
There are many innovative new responses being developed to help cut down on abuses of messaging and social media apps, through both policy responses and technical solutions. WhatsApp, for example, already limits how many recipients a message can be forwarded to at a time and shows users when a message has been forwarded; viral messages are labeled with double arrows to indicate they did not originate from a close contact. However, shutting down bad actors cannot come at the expense of silencing millions of other users, invading their privacy, or undermining their security. To ensure that human rights are preserved, the Brazilian legislature must reject the current version of this bill. Moving forward, human rights such as privacy, free expression, and security must be baked into the law from the beginning.
For years, EFF has been monitoring a dangerous situation in Egypt: journalists, bloggers, and activists have been harassed, detained, arrested, and jailed, sometimes without trial, in increasing numbers by the Sisi regime. Since the COVID-19 pandemic began, these incidents have skyrocketed, affecting free expression both online and offline.
As we’ve said before, this crisis makes it more important than ever for individuals to be able to speak out and share information with one another online. Free expression and access to information are particularly critical under authoritarian rulers and governments that dismiss or distort scientific data. But at a time when accurate information about the pandemic may save lives, the Egyptian government has instead expelled journalists from the country for their reporting on the pandemic, and arrested others on spurious charges for seeking information about prison conditions. Shortly after the coronavirus crisis began, a reporter for The Guardian was deported, while a reporter for The New York Times was issued a warning. Just last week the editor of Al Manassa, Nora Younis, was arrested on cybercrime charges (and later released). And the Committee to Protect Journalists reported today that at least four journalists arrested during the pandemic remain imprisoned.
Social media is also being monitored more closely than ever, with disastrous results: the Supreme Council for Media Regulation has banned the publication of any data that contradicts the Ministry of Health’s official figures. It has sent warning letters to news websites and social media accounts it claims are sharing false news, and individuals have been arrested for posting about the virus. The far-reaching ban, justified on national security grounds, also limits journalists’ use of pseudonyms and criminalizes discussion of other “sensitive” topics, such as Libya; it is rightly seen across the country as censorship. At a moment when obtaining accurate information is extremely important, the Egyptian government’s escalating attack on free expression is especially dangerous.
The government’s attacks on expression aren’t only damaging free speech online. Rather than limiting the number of prisoners potentially exposed to the virus, Egyptian police have made matters worse by harassing, beating, and even arresting protestors who demand the release of prisoners held in dangerously overcrowded cells, or who simply ask for information about their arrested loved ones. Just last week, the family of Alaa Abd El Fattah, a leading Egyptian coder, blogger, and activist whom we’ve profiled in our Offline campaign, was attacked by police while protesting in front of Tora Prison. The next day, Alaa’s sister, Sanaa Seif, was forced into an unmarked car in front of the Prosecutor-General’s office as she arrived to submit a complaint regarding the assault and Alaa’s detention. She is now being held in pre-trial detention on charges including “broadcast[ing] fake news and rumors about the country’s deteriorating health conditions and the spread of the coronavirus in prisons” on Facebook—according to police, for a fifteen-day period, though there is no way to know for sure that it will end then.
All of these actions put the health and safety of the Egyptian population at risk. We join the international coalition of human rights and civil liberties organizations demanding both Alaa and Sanaa be released, and asking Egypt’s government to immediately halt its assault on free speech and free expression. We must lift up the voices of those who are being silenced to ensure the safety of everyone throughout the country.
Banner image CC-BY, by Molly Crabapple.
With the passage of last year's Copyright Directive, the EU demanded that member states pass laws that reduce copyright infringement by internet users while also requiring that they safeguard the fundamental rights of users (such as the right to free expression) and the limitations to copyright. These safeguards must include protections for the new EU-wide exemption for commentary and criticism. Meanwhile, member states are also required to uphold the GDPR, which safeguards users against mass, indiscriminate surveillance—even as the Directive demands that everything every user posts be monitored to decide whether it infringes copyright.
Serving these goals means that when EU member states turn the Directive into their national laws (the "transposition" process), their governments will have to decide how much weight to give each part of the Directive, and courts will have to figure out whether the resulting laws pass constitutional muster while still satisfying member states' obligations under EU law.
The initial forays into transposition were catastrophic. First came France's disastrous proposal, which "balanced" copyright enforcement with Europeans' fundamental rights to fairness, free expression, and privacy by simply ignoring those public rights.
Now, the Dutch Parliament has landed in the same untenable legislative cul-de-sac as their French counterparts, proposing a Made-in-Holland version of the Copyright Directive that omits:
- Legally sufficient protections for users unjustly censored due to false accusations of copyright infringement;
- Legally sufficient protection for users whose work makes use of the mandatory, statutory exemptions for parody and criticism;
- A ban on "general monitoring"—that is, continuous, mass surveillance;
- Legally sufficient protection for "legitimate uses" of copyright works.
These are not optional elements of the Copyright Directive. These protections were enshrined in the Directive as part of the bargain meant to balance the fundamental rights of Europeans against the commercial interests of entertainment corporations. The Dutch Parliament's willingness to treat these human rights-preserving measures as mere legislative inconveniences is a grim harbinger of other EU nations' pending lawmaking, and an indictment of the Dutch Parliament's commitment to human rights.
EFF was pleased to lead a coalition of libraries, human rights NGOs, and users' rights organizations in an open letter to the EU Commission, asking it to ensure that national implementations respect human rights.
In April, we followed this letter with a note to the EC's Copyright Stakeholder Dialogue Team, setting out the impossibility of squaring the Copyright Directive with the GDPR's rules protecting Europeans from "general monitoring," and calling on the team to direct member states to create test suites that can evaluate whether companies' responses to their laws live up to their human rights obligations.
Today, we renew these and other demands, and we ask that Dutch Parliamentarians do their job in transposing the Copyright Directive, with the understanding that the provisions that protect Europeans' rights are not mere ornaments, and any law that fails to uphold those provisions is on a collision course with years of painful, costly litigation.
EFF Legal Intern Rachel Sommers contributed to this post.
When Google announced its intention to buy Fitbit in April, we had deep concerns. Google, a notoriously data-hungry company with a track record of reneging on its privacy policies, was about to buy one of the most successful wearables companies in the world—after Google had repeatedly tried to launch a competing product, only to fail, over and over.
Fitbit users give their devices extraordinary access to their sensitive personal details, from their menstrual cycles to their alcohol consumption. In many cases, these "customers" didn't come to Fitbit willingly, but instead were coerced into giving the company their data in order to get the full benefit of their employer-provided health insurance.
Companies can grow by making things that people love, or they can grow by buying things that people love. One produces innovation, the other produces monopolies.
Last month, EFF put out a call for Fitbit owners' own thoughts about the merger, so that we could tell your story to the public and to the regulators who will have the final say over the merger. You obliged with a collection of thoughtful, insightful, and illuminating remarks that you generously permitted us to share. Here's a sampling from the collection:
From K.H.: “It makes me very uncomfortable to think of Google being able to track and store even more of my information. Especially the more sensitive, personal info that is collected on my Fitbit.”
From L.B.: “Despite the fact that I continue to use a Gmail account (sigh), I never intended for Google to own my fitness data and have been seeking an alternative fitness tracker ever since the merger was announced.”
From B.C.: “I just read your article about this and wanted to say that while I’ve owned and worn a Fitbit since the Charge (before the HR), I have been looking for an alternative since I read that Google was looking to acquire Fitbit. I really don’t want “targeted advertisements” based on my health data or my information being sold to the highest bidder.”
From T.F.: “I stopped confirming my period dates, drinks and weight loss on my fitbit since i read about the [Google] merger. Somehow, i would prefer not to become a statistic on [Google].”
From D.M.: “My family has used Fitbit products for years now and the idea of Google merging with them, in my opinion, is good and bad. Like everything in the tech industry, there are companies that hog all of the spotlight like Google. Google owns so many smaller companies and ideas that almost every productivity and shopping app on any mobile platform is in some way linked or owned by them. Fitbit has been doing just fine making their own trackers and products without any help from the tech giants, and that doesn’t need to stop now. I'm not against Google, but they have had a few security issues and their own phone line, the pixel, hasn't been doing that well anyway. I think Fitbit should stay a stand alone company and keep making great products.”
From A.S.: “A few years back, I bought a Fitbit explicitly because they were doing well but didn't seem to be on the verge of being acquired. I genuinely prefer using Android over iOS, and no longer want to take on the work of maintaining devices on third party OSes, so I wanted to be able to monitor steps without thinking it was all going to a central location.
Upon hearing about the merger, I found myself relieved I didn’t use the Fitbit for long (I found I got plenty of steps already and it was just a source of anxiety) so that the data can't be merged with my massive Google owned footprint.”
From L.O.: “A few years ago, I bought a Fitbit to track my progress against weight-loss goals that I had established. Moreover, I have a long-term cardiac condition that requires monitoring by a third-party (via an ICD). So I wanted to have access to medical data that I could collect for myself. I had the choice to buy either an Apple Watch, Samsung Gear, Google Fit gear, or a Fitbit. I chose to purchase a Fitbit for one simple reason: I wanted to have a fitness device that did not belong to an OEM and/or data scavenger. So I bought a very expensive Fitbit Charge 2. I was delighted by the purchase. I had a top-of-the-line fitness device. And I had confidence that my intimate and personal data would be secure; I knew that my personal and confidential data would not be used to either target me or to include me in a targeted group.
Now that Google has purchased Fitbit, I have few options left that will allow me to confidentially collect and store my personal (and private) fitness information. I don't trust Google with my data. They have repeatedly lied about data collection. So I have no confidence in their assertions that they will once again "protect" my data. I trust that their history of extravagant claims followed by adulterous actions will be repeated.
My fears concerning Google are well-founded. And as a result, I finally had to switch my email to an encrypted email from a neutral nation (i.e., Switzerland). And now, I have to spend even more money to protect myself from past purchases that are being hijacked by a nefarious content broker. Why should I have to spend even more money in order to ensure my privacy? My privacy is guaranteed by the United States Constitution, isn't it? And it in an inalienable right, isn't it? Since when can someone steal my right to privacy and transform it into their right to generate even more money? As a citizen, I demand that my right to privacy be recognized and defended by local, state, and federal governments. And in the meantime, I'm hoping that someone will create a truly private service for collecting and storing my personal medical information.”
From E.R.: “Around this time last year, I went to the Nest website. I am slowly making my condo a smart home with Alexa and I like making sure everything can connect to each other. I hopped on and was instantly asked to log in via Google. I was instantly filled with regret. I had my thermostat for just over a year and I knew that I hadn't done my research and the Google giant had one more point of data collection on me - plus it was connected to my light bulbs and Echo. Great.
Soon, I learn the Versa 2 is coming out - best part? It has ALEXA! I sign up right away—this is still safe. Sure. Amazon isn't that great at data secrets, but a heck of a lot better than Google connected apps. Then, I got the news of the merger. I told my boyfriend this would be the last FitBit I owned—but have been torn as it has been a motivating tool for me and a way to be in competition with my family now that we live in different states. But it would be yet another data point for Google, leaving me wondering when it will possibly end.
This may be odd coming from a Gmail account—but frankly, Gmail is the preferred UI for me. I tried to avoid Google search, but it proved futile when I just wasn't getting the same results. Google slowly has more and more of my life—from YouTube videos, to email, to home heating, and now fitness... when is enough enough?”
From J.R.: “My choice to buy a Fitbit device instead of using a GoogleFit related device/app is largely about avoiding giving google more data.
My choice to try Waze during its infancy was as much about its promise to the future as it was that it was not a Google Product and therefore google wouldn't have all of my families sensitive driving data.
Google paid a cheap 1 Billion to purchase all my data from Waze and then proceed to do nothing to improve the app. The app actually performs worse now on the same phone, sometimes taking 30 minutes to acquire GPS satellites that Google Maps (which i can't uninstall) see immediately.
Google now has all my historic driving data for years.... besides the fact that there is no real competitor to Waze and it does not seem like any company will ever try to compete with Google again on Maps and traffic data... why not continue using it? from my history, they can probably predict my future better than me.
The same with Fitbit... Now google will know every place I Run, Jog and walk.... not just where I park but exactly where i go.... is it not enough for them to know i went to the hospital but now they will know which floor (elevation), which wing (precise location data).... they will get into mapping hospitals and other areas.... they will know exactly where we are and what we are doing....
They will also sell our health data to various types of insurance companies, etc.
I believe Google should be broken up and not allowed to share data between the separate companies. I don't believe google should be able to buy out companies that harvest data as part of their mission. If google buys fitbit, i will certainly close the account, delete what I can from it and sell the fitbit (if it has value left)....”
While the overwhelming majority of comments sought to halt the merger, a few people wrote to us in support of it. Here's one of those comments.
From T.W.: “I'm really looking forward to the merger. I see the integration of Fitbit and Google Fit as a great bonus and hope to get far more insights than I get now. Hopefully the integration will progress really soon!”
If you're a Fitbit owner and you're alarmed by the thought of your data being handed to Google, we'd love to hear from you. Write to us at email@example.com, and please let us know:
- If we can publish your story (and, if so, whether you'd prefer to be anonymous);
- If we can share your story with government agencies;
- If we can share your email address with regulators looking for testimony.
San Francisco – The Electronic Frontier Foundation (EFF) is joining forces with the law firm of Durie Tangri to defend the Internet Archive against a lawsuit that threatens its Controlled Digital Lending (CDL) program, which helps people all over the world check out digital copies of books owned by the Archive and its partner libraries.
“Libraries protect, preserve, and make the world’s information accessible to everyone,” said Internet Archive Founder and Digital Librarian Brewster Kahle. “The publishers are suing to shut down a library and remove books from our digital shelves. This will have a chilling effect on a longstanding and widespread library practice of lending digitized books.”
The non-profit Internet Archive is a digital library, preserving and providing access to cultural artifacts of all kinds in electronic form. CDL allows people to check out digital copies of books for two weeks or less, and only permits patrons to check out as many copies as the Archive and its partner libraries physically own. That means that if the Archive and its partner libraries have only one copy of a book, then only one patron can borrow it at a time, just like any other library.
Four publishers sued the Archive earlier this month, alleging that CDL violates their copyrights. In their complaint, Hachette, HarperCollins, Wiley, and Penguin Random House claim CDL has cost their companies millions of dollars and is a threat to their businesses.
“EFF is proud to stand with the Archive and protect this important public service,” said EFF Legal Director Corynne McSherry. “Controlled digital lending helps get books to teachers, children and the general public at a time when that is more needed and more difficult than ever. It is no threat to any publisher’s bottom line.”
“Internet Archive is lending library books to one patron at a time,” said Durie Tangri partner Joe Gratz. “That’s what libraries have done for centuries, and we’re proud to represent Internet Archive in standing up for the rights of libraries in the digital age.”

Contact:
Corynne McSherry, Legal Director, corynne@eff.org
Joe Gratz, Partner at Durie Tangri, jgratz@durietangri.com
Chris Freeland, Internet Archive, chrisfreeland@archive.org