Electronic Frontier Foundation

Live Blog: Senate Commerce Committee Discusses SESTA

EFF - Mon, 09/18/2017 - 8:27pm

There’s a bill in Congress that would be a disaster for free speech online. The Senate Committee on Commerce, Science, and Transportation is holding a hearing on that bill, and we’ll be blogging about it as it happens.

The Stop Enabling Sex Traffickers Act (SESTA) might sound virtuous, but it’s the wrong solution to a serious problem. The authors of SESTA say it’s designed to fight sex trafficking, but the bill wouldn’t punish traffickers. What it would do is threaten legitimate online speech.

Join us at 7:30 a.m. Pacific time (10:30 Eastern) on Tuesday, right here and on the @EFFLive Twitter account. We’ll let you know how to watch the hearing, and we’ll share our thoughts on it as it happens. In the meantime, please take a moment to tell your members of Congress to Stop SESTA.

Take Action

Tell Congress: Stop SESTA.

The Cybercrime Convention's New Protocol Needs to Uphold Human Rights

EFF - Mon, 09/18/2017 - 7:10pm

As part of an ongoing attempt to help law enforcement obtain data across international borders, the Council of Europe’s Cybercrime Convention—finalized in the weeks following 9/11, and ratified by the United States and over 50 countries around the world—is back on the global lawmaking agenda. This time, the Council’s Cybercrime Convention Committee (T-CY) has initiated a process to draft a second additional protocol to the Convention—a new text which could allow direct foreign law enforcement access to data stored in other countries’ territories. EFF has joined EDRi and a number of other organizations in a letter to the Council of Europe, highlighting some anticipated concerns with the upcoming process and seeking to ensure that civil society concerns are considered in the new protocol. The new protocol needs to honor the Council of Europe’s stated aim of upholding human rights, and must not undermine privacy or the integrity of our communication networks.

How the Long Arm of Law Reaches into Foreign Servers

Thanks to the internet, individuals and their data increasingly reside in different jurisdictions: your email might be stored on a Google server in the United States, while your shared Word documents might be stored by Microsoft in Ireland. Law enforcement agencies across the world have sought to gain access to this data, wherever it is held. That means police in one country frequently seek to extract personal, private data from servers in another.

Currently, the primary international mechanism for facilitating governmental cross border data access is the Mutual Legal Assistance Treaty (MLAT) process, a series of treaties between two or more states that create a formal basis for cooperation between designated authorities of signatories. These treaties typically include some safeguards for privacy and due process, most often the safeguards of the country that hosts the data.

The MLAT regime includes steps to protect privacy and due process, but frustrated agencies have increasingly sought to bypass it, either by hacking across borders or by leaning on large service providers in foreign jurisdictions to hand over data voluntarily.

The legalities of cross-border hacking remain very murky, and its operation is the very opposite of transparent and proportionate. Meanwhile, voluntary cooperation between service providers and law enforcement occurs outside the MLAT process and without any clear accountability framework. The primary window of insight into its scope and operation is the annual Transparency Reports voluntarily issued by some companies such as Google and Twitter.

Hacking often blatantly ignores the laws and rights of a foreign state, but voluntary data handovers can be used to bypass domestic legal protections too.  In Canada, for example, the right to privacy includes rigorous safeguards for online anonymity: private Internet companies are not permitted to identify customers without prior judicial authorization. By identifying often sensitive anonymous online activity directly through the voluntary cooperation of a foreign company not bound by Canadian privacy law, law enforcement agents can effectively bypass this domestic privacy standard.

Faster, but not Better: Bypassing MLAT

The MLAT regime has been criticized as slow and inefficient. Law enforcement officers have claimed that they have to wait anywhere between six and 10 months—the reported average time frame for receiving data through an MLAT request—for data necessary to their local investigations. Much of this delay, however, is attributable to a lack of adequate resources, streamlining, and prioritization for the huge increase in MLAT requests for data held in the United States, plus the absence of adequate training for law enforcement officers seeking to rely on another state’s legal search and seizure powers.

Instead of just working to make the MLAT process more effective, the T-CY committee is seeking to create a parallel mechanism for cross-border cooperation. While the process is still in its earliest stages, many are concerned that the resulting proposals will replicate many of the problems in the existing regime, while adding new ones.

What the New Protocol Might Contain

The Terms of Reference for the drafting of this new second protocol reveal some areas that may be included in the final proposal.

Simplified mechanisms for cross border access

T-CY has flagged a number of new mechanisms it believes will streamline cross-border data access. The terms of reference mention a “simplified regime” for legal assistance with respect to subscriber data. Such a regime could be highly controversial if it compelled companies to identify anonymous online activity without prior judicial authorization. The terms of reference also envision the creation of “international production orders.” Presumably these would be orders issued by one court under its own standards, but that must be respected by Internet companies in other jurisdictions. Such mechanisms could be problematic where they do not respect the privacy and due process rights of both jurisdictions.

Direct cooperation

The terms of reference also call for "provisions allowing for direct cooperation with service providers in other jurisdictions with regard to requests for [i] subscriber information, [ii] preservation requests, and [iii] emergency requests." These mechanisms would be permissive, clearing the way for companies in one state to voluntarily cooperate with certain types of requests issued by another, even in the absence of any form of judicial authorization.

Each of the proposed direct cooperation mechanisms could be problematic. Preservation requests are not controversial per se. Companies often have standard retention periods for different types of data sets. Preservation orders are intended to extend these so that law enforcement have sufficient time to obtain proper legal authorization to access the preserved data. However, preservation should not be undertaken frivolously. It can carry an accompanying stigma, and exposes affected individuals’ data to greater risk if a security breach occurs during the preservation period. This is why some jurisdictions require reasonable suspicion and court orders as requirements for preservation orders.

Direct voluntary cooperation on emergency matters is challenging as well. In such instances there is little time to engage the judicial apparatus, and most states recognize direct access to private customer data in emergency situations, but such access can still be subject to controversial overreach. This potential for overreach, and even abuse, becomes far higher where there is a disconnect between standards in the requesting and responding jurisdictions.

Direct cooperation in identifying customers can be equally controversial. Anonymity is critical to privacy in digital contexts. Some data protection laws (such as Canada’s federal privacy law) prevent Internet companies from voluntarily providing subscriber data to law enforcement.


The terms of reference also envision the adoption of “safeguards.” The scope and nature of these will be critical. Indeed, one of the strongest criticisms of the original Cybercrime Convention has been its lack of specific protections and safeguards for privacy and other human rights. The EDRi letter calls for adherence to the Council of Europe’s data protection regime, Convention 108, as a minimum prerequisite to participation in the envisioned regime for cross-border access, which would provide some basis for shared privacy protection. The letter also calls for detailed statistical reporting and other safeguards.

What’s next?

On 18 September, the T-CY Bureau will meet with European Digital Rights (EDRi) to discuss the protocol. The first meeting of the Drafting Group will be held on 19 and 20 September. The draft Protocol will be prepared and finalized by the T-CY in closed session.

Law enforcement agencies are granted extraordinary powers to invade privacy in order to investigate crime. This proposed second protocol to the Cybercrime Convention must ensure that the highest privacy standards and due process protections adopted by signatory states remain intact.

We believe that the Council of Europe T-CY Committee — Netherlands, Romania, Canada, Dominican Republic, Estonia, Mauritius, Norway, Portugal, Sri Lanka, Switzerland, and Ukraine — should concentrate first on fixes to the existing MLAT process, and it should ensure that this new initiative does not become an exercise in harmonization to the lowest common denominator of international privacy protection. We'll be keeping track of what happens next.

EFF to Court: The First Amendment Protects the Right to Record First Responders

EFF - Mon, 09/18/2017 - 6:57pm

The First Amendment protects the right of members of the public to record first responders addressing medical emergencies, EFF argued in an amicus brief filed in the federal trial court for the Northern District of Texas. The case, Adelman v. DART, concerns the arrest of a Dallas freelance press photographer for criminal trespass after he took photos of a man receiving emergency treatment in a public area.

EFF’s amicus brief argues that people frequently use electronic devices to record and share photos and videos. This often includes newsworthy recordings of on-duty police officers and emergency medical services (EMS) personnel interacting with members of the public. These recordings have informed the public’s understanding of emergencies and first responder misconduct.

EFF’s brief was joined by a broad coalition of media organizations: the Freedom of the Press Foundation, the National Press Photographers Association, the PEN American Center, the Radio and Television Digital News Association, Reporters Without Borders, the Society of Professional Journalists, the Texas Association of Broadcasters, and the Texas Press Association.

Our local counsel are Thomas Leatherbury and Marc Fuller of Vinson & Elkins L.L.P.

EFF’s new brief builds on our amicus brief filed last year before the Third Circuit Court of Appeals in Fields v. Philadelphia. There, we successfully argued that the First Amendment protects the right to use electronic devices to record on-duty police officers.

Adelman, a freelance journalist, has provided photographs to media outlets for nearly 30 years. He heard a call for paramedics to respond to a K2 overdose victim at a Dallas Area Rapid Transit (“DART”) station. When he arrived, he believed the incident might be of public interest and began photographing the scene. A DART police officer demanded that Adelman stop taking photos. Despite Adelman’s assertion that he was well within his constitutional rights, the DART officer, with approval from her supervisor, arrested Adelman for criminal trespass.

Adelman sued the officer and DART. EFF’s amicus brief supports his motion for summary judgment.

Security Education: What's New on Surveillance Self-Defense

EFF - Mon, 09/18/2017 - 4:36pm

Since 2014, our digital security guide, Surveillance Self-Defense (SSD), has taught thousands of Internet users how to protect themselves from surveillance, with practical tutorials and advice on the best tools and expert-approved best practices. After hearing growing concerns among activists following the 2016 US presidential election, we pledged to build, update, and expand SSD and our other security education materials to better advise people, both within and outside the United States, on how to protect their online digital privacy and security.

While there’s still work to be done, here’s what we’ve been up to over the past several months.

SSD Guide Audit

SSD is consistently updated based on evolving technology, current events, and user feedback, but this year our SSD guides are going through a more in-depth technical and legal review to ensure they’re still relevant and up-to-date. We’ve also put our guides through a "simple English" review in order to make them more usable for digital security novices and veterans alike. We've worked to make them a little less jargon-filled, and more straightforward. That helps everyone, whether English is their first language or not. It also makes translation and localization easier: that's important for us, as SSD is maintained in eleven languages.

Many of these changes are based on reader feedback. We'd like to thank everyone for all the messages you've sent and encourage you to continue providing notes and suggestions, which helps us preserve SSD as a reliable resource for people all over the world. Please keep in mind that some feedback may take longer to incorporate than others, so if you've made a substantive suggestion, we may still be working on it!

As of today, we’ve updated the following guides and documents:

Assessing your Risks

Formerly known as "Threat Modeling," our Assessing your Risks guide was updated to be less intimidating to those new to digital security. Threat modeling is the primary and most important thing we teach at our security trainings, and because it’s such a fundamental skill, we wanted to ensure all users were able to grasp the concept. This guide walks users through how to conduct their own personal threat modeling assessment. We hope users and trainers will find it useful.

SSD Glossary Updates

SSD hosts a glossary of technical terms that users may encounter when using the security guide. We’ve added new terms and intend to expand this resource over the coming months.

How to: Avoid Phishing Attacks

With new updates, this guide helps users identify phishing attacks when they encounter them and delves deeper into the types of phishing attacks that are out there. It also outlines five practical ways users can protect themselves against such attacks.

One new tip we added suggests using a password manager with autofill. Password managers that auto-fill passwords keep track of which sites those passwords belong to. While it’s easy for a human to be tricked by fake login pages, password managers are not tricked in the same way. Check out the guide for more details, and for other tips to help defend against phishing.
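To illustrate why autofill resists phishing, here is a minimal sketch in Python. The vault contents and domains are hypothetical, and real password managers match origins more carefully (scheme, port, registrable domain), but the core idea is the same: credentials are tied to the exact stored site, so a look-alike domain gets nothing.

```python
# Toy model of a password manager's autofill logic.
# The vault maps a stored origin to its credentials (hypothetical examples).
vault = {
    "accounts.example.com": ("alice", "hunter2"),
}

def autofill(origin: str):
    """Return credentials only for an exact match on the stored origin."""
    return vault.get(origin)

# The real site gets its credentials filled in.
assert autofill("accounts.example.com") == ("alice", "hunter2")

# A convincing fake ("examp1e", with a digit one in place of the letter l)
# can fool a human eye, but fails the exact string comparison.
assert autofill("accounts.examp1e.com") is None
```

A human comparing two visually similar strings is doing fuzzy matching; the manager's exact comparison is what closes the gap that phishing pages exploit.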

How to: Use Tor

We updated How to: Use Tor for Windows and How to: Use Tor for macOS and added a new How to: Use Tor for Linux guide to SSD. These guides all include new screenshots and step-by-step instructions for how to install and use the Tor Browser—perfect for people who might need occasional anonymity and privacy when accessing websites.

How to: Install Tor Messenger (beta) for macOS

We've added two new guides on installing and using Tor Messenger for instant communications.  In addition to going over the Tor network, which hides your location and can protect your anonymity, Tor Messenger ensures messages are sent strictly with Off-the-Record (OTR) encryption. This means your chats with friends will only be readable by them—not a third party or service provider.  Finally, we believe Tor Messenger is employing best practices in security where other XMPP messaging apps fall short.  We plan to add installation guides for Windows and Linux in the future.

Other guides we've updated include circumventing online censorship, and using two-factor authentication.

What’s coming up?

Continuation of our audit: This audit is ongoing, so stay tuned for more security guide updates over the coming months, as well as new additions to the SSD glossary.

Translations: As we continue to audit the guides, we’ll be updating our translated content. If you’re interested in volunteering as a translator, check out EFF’s Volunteer page.

Training materials: Nothing gratifies us more than hearing that someone used SSD to teach a friend or family member how to make stronger passwords, or how to encrypt their devices. While SSD was originally intended to be a self-teaching resource, we're working towards expanding the guide with resources for users to lead their friends and neighbors in healthy security practices. We’re working hard to ensure this is done in coordination with the powerful efforts of similar initiatives, and we seek to support, complement, and add to that collective body of knowledge and practice.

Thus we’ve interviewed dozens of US-based and international trainers about what learners struggle with, their teaching techniques, the types of materials they use, and what kinds of educational content and resources they want. We’re also conducting frequent critical assessment of learners and trainers, with regular live-testing of our workshop content and user testing evaluations of the SSD website.

It’s been humbling to observe where beginners have difficulty learning concepts or tools, and to hear where trainers struggle using our materials. With their feedback fresh in mind, we continue to iterate on the materials and curriculum.

Over the next few months, we are rolling out new content for a teacher’s edition of SSD, intended for short awareness-raising one to four hour-long sessions. If you’re interested in testing our early draft digital security educational materials and providing feedback on how they worked, please fill out this form by September 30. We can’t wait to share them with you.


Azure Confidential Computing Heralds the Next Generation of Encryption in the Cloud

EFF - Mon, 09/18/2017 - 1:37pm

For years, EFF has commended companies that make cloud applications that encrypt data in transit. But soon, the new gold standard for cloud application encryption will be the cloud provider never having access to the user’s data—not even while performing computations on it.

Microsoft has become the first major cloud provider to offer developers the ability to build their applications on top of Intel’s Software Guard Extensions (SGX) technology, making Azure “the first SGX-capable servers in the public cloud.” Azure customers in Microsoft’s Early Access program can now begin to develop applications with the “confidential computing” technology.

Intel SGX uses protections baked into the hardware to ensure that data remains secure, even from the platform it’s running on. That means that an application that protects its secrets inside SGX is protecting them not just from other applications running on the system, but from the operating system, the hypervisor, and even Intel’s Management Engine, an extremely privileged coprocessor that we’ve previously warned about.

Cryptographic methods of computing on encrypted data are still an active body of research, with most methods still too inefficient or involving too much data leakage to see practical use in industry. Secure enclaves like SGX, also known as Trusted Execution Environments (TEEs), offer an alternative path for applications looking to compute over encrypted data. For example, a messaging service with a server that uses secure enclaves offers similar guarantees to end-to-end encrypted services. But whereas an end-to-end encrypted messaging service would have to use client-side search, or accept either side-channel leakage or inefficiency to implement server-side search, by using an enclave it can provide server-side search functionality with always-encrypted guarantees at little additional computational cost. The same is true for the classic challenge of changing the key under which a ciphertext is encrypted without access to the key, known as proxy re-encryption. Many problems that have challenged cryptographers for decades to find efficient, leakage-free solutions are solvable instead by a sufficiently robust secure enclave ecosystem.
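The enclave approach to key rotation can be sketched conceptually. The following Python toy uses a one-time-pad-style XOR "cipher" purely for illustration (it is not real cryptography, and SGX itself is not involved): the point is that inside an enclave, re-encryption is just decrypt-then-encrypt, and because the host can never read enclave memory, neither key nor plaintext is ever exposed to it.

```python
# Conceptual sketch only: a toy XOR cipher standing in for real encryption.
# In a real deployment, reencrypt() would execute inside the SGX enclave,
# so old_key, new_key, and the plaintext never appear in host memory.
import secrets

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """One-time-pad-style XOR; key must be at least as long as the message."""
    return bytes(k ^ p for k, p in zip(key, plaintext))

xor_decrypt = xor_encrypt  # XOR is its own inverse

def reencrypt(old_key: bytes, new_key: bytes, ciphertext: bytes) -> bytes:
    """'Inside the enclave': decrypt under the old key, re-encrypt under the new."""
    plaintext = xor_decrypt(old_key, ciphertext)
    return xor_encrypt(new_key, plaintext)

msg = b"attack at dawn"
k1 = secrets.token_bytes(len(msg))
k2 = secrets.token_bytes(len(msg))

ct1 = xor_encrypt(k1, msg)
ct2 = reencrypt(k1, k2, ct1)        # the host only ever sees ct1 and ct2
assert xor_decrypt(k2, ct2) == msg  # the new key recovers the message
```

Doing the same thing *without* a trusted environment is the hard proxy re-encryption problem cryptographers have worked on for decades; with a robust enclave, the naive decrypt-then-encrypt approach becomes acceptable because the decryption step is shielded from the host.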

While there is great potential here, SGX is still a relatively new technology, meaning that security vulnerabilities are still being discovered as more research is done. Memory corruption vulnerabilities within enclaves can be exploited by classic attack mechanisms like return-oriented programming (ROP). Various side channel attacks have been discovered, some of which are mitigated by a growing host of protective techniques. Promisingly, Microsoft’s press release teases that they’re “working with Intel and other hardware and software partners to develop additional TEEs and will support them as they become available.” This could indicate that they’re working on developing something like Sanctum, which isolates caches by trusted application, reducing a major side channel attack surface. Until these issues are fully addressed, a dedicated attacker could recover some or all of the data protected by SGX, but it’s still a massive improvement over not using hardware protection at all.

The technology underlying Azure Confidential Computing is not yet perfect, but it's efficient enough for practical usage, stops whole classes of attacks, and is available today. EFF applauds this giant step towards making encrypted applications in the cloud feasible, and we look forward to seeing cloud offerings from major providers like Amazon and Google follow suit. Secure enclaves have the potential to be a new frontier in offering users privacy in the cloud, and it will be exciting to see the applications that developers build now that this technology is becoming more widely available.

An open letter to the W3C Director, CEO, team and membership

EFF - Mon, 09/18/2017 - 12:50pm

Dear Jeff, Tim, and colleagues,

In 2013, EFF was disappointed to learn that the W3C had taken on the project of standardizing “Encrypted Media Extensions,” an API whose sole function was to provide a first-class role for DRM within the Web browser ecosystem. By doing so, the organization offered the use of its patent pool, its staff support, and its moral authority to the idea that browsers can and should be designed to cede control over key aspects from users to remote parties.

When it became clear, following our formal objection, that the W3C's largest corporate members and leadership were wedded to this project despite strong discontent from within the W3C membership and staff, their most important partners, and other supporters of the open Web, we proposed a compromise. We agreed to stand down regarding the EME standard, provided that the W3C extend its existing IPR policies to deter members from using DRM laws in connection with the EME (such as Section 1201 of the US Digital Millennium Copyright Act or European national implementations of Article 6 of the EUCD) except in combination with another cause of action.

This covenant would allow the W3C's large corporate members to enforce their copyrights. Indeed, it kept intact every legal right to which entertainment companies, DRM vendors, and their business partners can otherwise lay claim. The compromise merely restricted their ability to use the W3C's DRM to shut down legitimate activities, like research and modifications, that required circumvention of DRM. It would signal to the world that the W3C wanted to make a difference in how DRM was enforced: that it would use its authority to draw a line between the acceptability of DRM as an optional technology, as opposed to an excuse to undermine legitimate research and innovation.

More directly, such a covenant would have helped protect the key stakeholders, present and future, who both depend on the openness of the Web, and who actively work to protect its safety and universality. It would offer some legal clarity for those who bypass DRM to engage in security research to find defects that would endanger billions of web users; or who automate the creation of enhanced, accessible video for people with disabilities; or who archive the Web for posterity. It would help protect new market entrants intent on creating competitive, innovative products, unimagined by the vendors locking down web video.

Despite the support of W3C members from many sectors, the leadership of the W3C rejected this compromise. The W3C leadership countered with proposals — like the chartering of a nonbinding discussion group on the policy questions that was not scheduled to report in until long after the EME ship had sailed — that would have still left researchers, governments, archives, and security experts unprotected.

The W3C is a body that ostensibly operates on consensus. Nevertheless, as the coalition in support of a DRM compromise grew and grew — and the large corporate members continued to reject any meaningful compromise — the W3C leadership persisted in treating EME as a topic that could be decided by one side of the debate. In essence, a core of EME proponents was able to impose its will on the Consortium, over the wishes of a sizeable group of objectors — and every person who uses the web. The Director decided to personally override every single objection raised by the members, articulating several benefits that EME offered over the DRM that HTML5 had made impossible.

But those very benefits (such as improvements to accessibility and privacy) depend on the public being able to exercise rights they lose under DRM law — which meant that without the compromise the Director was overriding, none of those benefits could be realized, either. That rejection prompted the first appeal against the Director in W3C history.

In our campaigning on this issue, we have spoken to many, many members' representatives who privately confided their belief that the EME was a terrible idea (generally they used stronger language) and their sincere desire that their employer wasn't on the wrong side of this issue. This is unsurprising. You have to search long and hard to find an independent technologist who believes that DRM is possible, let alone a good idea. Yet, somewhere along the way, the business values of those outside the web got important enough, and the values of technologists who built it got disposable enough, that even the wise elders who make our standards voted for something they know to be a fool's errand.

We believe they will regret that choice. Today, the W3C bequeaths a legally unauditable attack surface to browsers used by billions of people. They give media companies the power to sue or intimidate away those who might re-purpose video for people with disabilities. They side against the archivists who are scrambling to preserve the public record of our era. The W3C process has been abused by companies that made their fortunes by upsetting the established order, and now, thanks to EME, they’ll be able to ensure no one ever subjects them to the same innovative pressures.

So we'll keep fighting to keep the web free and open. We'll keep suing the US government to overturn the laws that make DRM so toxic, and we'll keep bringing that fight to the world's legislatures that are being misled by the US Trade Representative to instigate local equivalents to America's legal mistakes.

We will renew our work to battle the media companies that fail to adapt videos for accessibility purposes, even though the W3C squandered the perfect moment to exact a promise to protect those who are doing that work for them.

We will defend those who are put in harm's way for blowing the whistle on defects in EME implementations.

It is a tragedy that we will be doing that without our friends at the W3C, and with the world believing that the pioneers and creators of the web no longer care about these matters.

Effective today, EFF is resigning from the W3C.

Thank you,

Cory Doctorow
Advisory Committee Representative to the W3C for the Electronic Frontier Foundation

California Legislature Sells Out Our Data to ISPs

EFF - Sat, 09/16/2017 - 10:10am

In the dead of night, the California Legislature shelved legislation that would have protected every Internet user in the state from having their data collected and sold by ISPs without their permission. By failing to pass A.B. 375, the legislature demonstrated that they put the profits of Verizon, AT&T, and Comcast over the privacy rights of their constituents.

Earlier this year, the Republican majority in Congress repealed the strong privacy rules issued by the Federal Communications Commission in 2016, which required ISPs to get affirmative consent before selling our data.  But while Congressional Democrats fought to protect our personal data, the Democratic-controlled California legislature did not follow suit. Instead, they kowtowed to an aggressive lobbying campaign, from telecommunications corporations and Internet companies, which included spurious claims and false social media advertisements about cybersecurity. 

“It is extremely disappointing that the California legislature failed to restore broadband privacy rights for residents in this state in response to the Trump Administration and Congressional efforts to roll back consumer protection,” EFF Legislative Counsel Ernesto Falcon said. “Californians will continue to be denied the legal right to say no to their cable or telephone company using their personal data for enhancing already high profits. Perhaps the legislature needs to spend more time talking to the 80% of voters that support the goal of A.B. 375 and less time with Comcast, AT&T, and Google's lobbyists in Sacramento.” 

All hope is not lost, because the bill is only stalled for the rest of the year. We can raise it again in 2018.

A.B. 375 was introduced late in the session; that it made it so far in the process so quickly demonstrates that there are many legislators who are all-in on privacy.  In January, EFF will build off this year's momentum with a renewed push to move A.B. 375 to the governor's desk. Mark your calendar and join us. 

In A Win For Privacy, Uber Restores User Control Over Location-Sharing

EFF - Fri, 09/15/2017 - 12:32pm

After Uber made an unfortunate change to its privacy settings last year, we are glad to see that the company has reverted to settings that empower its users to make choices about sharing their location information.

Last December, an Uber update restricted users' location-sharing choices to "Always" or "Never," removing the more fine-grained "While Using" setting. This meant that anyone who wanted to use Uber had to agree to share their location information with the app at all times or stop using the app altogether. In particular, it meant that riders would be tracked for five minutes after being dropped off.

Now, the "While Using" setting is back—and Uber says the post-ride tracking will end even for users who choose the "Always" setting. We are glad to see Uber returning control over location privacy to its users, and we hope the change will stick this time. EFF recommends that all users manually check that their Uber location privacy setting is on "While Using" after they receive the update:

1. Open the Uber app, and press the three horizontal lines on the top left to open the sidebar.

2. Once the sidebar is open, press Settings.

3. Scroll to the bottom of the settings page to select Privacy Settings.

4. In your privacy settings, select Location.

5. In Location, check to see if it says "Always." If it does, click to change it.

6. Here, change your location setting to "While Using" or "Never." Note that "Never" will require you to manually enter your pickup address every time you call a ride.

One Last Chance for Police Transparency in California

EFF - Fri, 09/15/2017 - 11:52am

As the days wind down for the California legislature to pass bills, transparency advocates have seen landmark measures fall by the wayside. Without explanation, an Assembly committee shelved legislation that would have shined light on police use of surveillance technologies, including a requirement that police departments seek approval from their city councils. The legislature also gutted a key reform to the California Public Records Act (CPRA) that would’ve allowed courts to fine agencies that improperly thwart requests for government documents. 

But there is one last chance for California to improve the public’s right to access police records. S.B. 345 would require every law enforcement agency in the state to publish on its website all “current standards, policies, practices, operating procedures, and education and training materials” by January 1, 2019. The legislation would cover all materials that would be otherwise available through a CPRA request.

S.B. 345 is now on Gov. Jerry Brown's desk, and he should sign it immediately. 

Take Action

Tell Gov. Brown to sign S.B. 345 into law

There are two main reasons EFF is supporting this bill. 

The first is obvious: in order to hold law enforcement accountable, we need to understand the rules that officers play by. For privacy advocates, access to materials about advanced surveillance technologies—such as automated license plate readers, facial recognition, drones, and social media monitoring—will lead to better and more informed policy debates. The bill would also strengthen the broader police accountability movement by requiring proactive release of policies and training materials about use of force, deaths in custody, body-worn cameras, and myriad other controversial police tactics and procedures.

The second reason is more philosophical: we believe that rather than putting the onus on the public to always file formal records requests, government agencies should automatically upload their records to the Internet whenever possible. S.B. 345 creates openness by default for hundreds of agencies across the state.

To think of it another way: S.B. 345 is akin to the legislature sending its own public records request to every law enforcement agency in the state. 

Unlike other measures EFF has supported this session, S.B. 345 has not drawn strong opposition from law enforcement. In fact, only the California State Sheriffs’ Association is in opposition, arguing that the bill could require the disclosure of potentially sensitive information. This is incorrect, since the bill would only require agencies to publish records that would already be available under the CPRA.  The claim is further undercut by the fact that eight organizations representing law enforcement have come out in support of the bill, including the California Narcotics Officers Association and the Association of Deputy District Attorneys. 

The bill isn’t perfect. As written, the enforcement mechanisms are vague, and it’s unclear what consequences, if any, agencies may face if they fail to post these records in a little more than a year. In addition, agencies may withhold or redact policies too aggressively, as is often the case with responses to traditional public records requests. Nevertheless, EFF believes that even the incremental measure contained in the bill will help pave the way for long-term transparency reforms.

Join us in urging Gov. Jerry Brown to sign this important bill. 

We're Asking the Copyright Office to Protect Your Right To Remix, Study, and Tinker With Digital Devices and Media

EFF - Thu, 09/14/2017 - 2:39pm

Who controls your digital devices and media? If it's not you, why not? EFF has filed new petitions with the Copyright Office to protect people in the United States from legal threats when they take control of their devices and media. We’re also seeking broader, better protection for security researchers and video creators against threats from Section 1201 of the Digital Millennium Copyright Act.

DMCA 1201 is a deeply flawed and unconstitutional law. It bans “circumvention” of access controls on copyrighted works, including software, and bans making or distributing tools that circumvent such digital locks. In effect, it lets hardware and software makers, along with major entertainment companies, control how your digital devices are allowed to function and how you can use digital media. It creates legal risks for security researchers, repair shops, artists, and technology users.

We’re fighting DMCA 1201 on many fronts, including a lawsuit to have the law struck down as unconstitutional. We’re also asking Congress to change the law. And every three years we petition the U.S. Copyright Office for temporary exemptions for some of the most important activities this law interferes with. This year, we’re asking the Copyright Office, along with the Librarian of Congress, to expand and simplify the exemptions they granted in 2015. We’re asking them to give legal protection to these activities:

  • Repair, diagnosis, and tinkering with any software-enabled device, including “Internet of Things” devices, appliances, computers, peripherals, toys, vehicles, and environmental automation systems;
  • Jailbreaking personal computing devices, including smartphones, tablets, smartwatches, and personal assistant devices like the Amazon Echo and the forthcoming Apple HomePod;
  • Using excerpts from video discs or streaming video for criticism or commentary, without the narrow limitations on users (noncommercial vidders, documentary filmmakers, certain students) that the Copyright Office now imposes;
  • Security research on software of all kinds, which can be found in consumer electronics, medical devices, vehicles, and more;
  • Lawful uses of video encrypted using High-bandwidth Digital Content Protection (HDCP, which is applied to content sent over the HDMI cables used by home video equipment).

Over the next few months, we’ll be presenting evidence to the Copyright Office to support these exemptions. We’ll also be supporting other exemptions, including one for vehicle maintenance and repair that was proposed by the Auto Care Association and the Consumer Technology Association. And we’ll be helping you, digital device users, tinkerers, and creators, make your voice heard in Washington DC on this issue.

Shrinking Transparency in the NAFTA and RCEP Negotiations

EFF - Thu, 09/14/2017 - 1:08pm

Provisions on digital trade are quietly being squared away in both of the two major trade negotiations currently underway—the North American Free Trade Agreement (NAFTA) renegotiation and the Regional Comprehensive Economic Partnership (RCEP) trade talks. But due to the worst-ever standards of transparency in both of these negotiations, we don’t know which provisions are on the table, which have been agreed, and which remain unresolved. The risk is that important and contentious digital issues—such as rules on copyright or software source code—might become bargaining chips in negotiation over broader economic issues including wages, manufacturing and dispute resolution, and that we would be none the wiser until after the deals have been done.

The danger of such bad compromises being made is especially acute because both deals are in trouble. Last month President Donald Trump targeted NAFTA, which includes Canada and Mexico, describing it in a tweet as "the worst trade deal ever made," one his administration "may have to terminate." At the conclusion of the second round of talks, held last week in Mexico, the prospect of an agreement being concluded anytime soon seems remote. Even as a third round of talks is scheduled for Ottawa from September 23-27, 2017, concern about the agreement's future has prompted Mexico to step up efforts to boost commerce with Asia, South America, and Europe.

The same is true of the RCEP agreement, which is being spearheaded by the 10-member ASEAN bloc and its six FTA partners, and which was expected to be concluded by the end of this year. Ratification of RCEP this year now seems unlikely, as nations remain far from agreement on key areas. Reports suggest, however, that the negotiators are targeting the e-commerce chapter as a priority area for early agreement. So far, the text specific to e-commerce has not been made publicly available, and the leaked Terms of Reference for the Working Group on E-commerce (WGEC) are the only indication of what issues could make an appearance in the RCEP. We have previously reported that the e-commerce chapter was expected to be shorter and less detailed than the chapters on goods and services. However, the secrecy of trade negotiations makes it very difficult to accurately track developments or the policy objectives being pushed through or prioritized in the negotiations.

Trade Negotiations Are Becoming Less Open, Not More

Far from adopting the enhanced measures of transparency and openness that EFF demanded and that U.S. Trade Representative Lighthizer promised to deliver, the NAFTA renegotiation process seems to be walking back from the minimal level of transparency that civil society fought hard for during the Trans-Pacific Partnership Agreement (TPP) talks. So far, the NAFTA process has had no open stakeholder meetings at its rounds to date. EFF has written a joint letter to negotiators [PDF], endorsed by groups including Access Now, Creative Commons, Derechos Digitales, and OpenMedia, demanding that negotiators reinstate stakeholder meetings as an initial step in opening up the negotiations to greater public scrutiny.

The openness of the RCEP negotiation process has also been degrading. At a public event held during the Auckland round, the Trade Minister of New Zealand and members of the Trade Negotiating Committee (TNC) fielded questions from stakeholders using social media, and the event was live-streamed. Organizers of earlier rounds of RCEP, held in South Korea and Indonesia, facilitated formal and informal meetings between negotiators and civil society organisations (CSOs). But at recent rounds the opportunities for interaction between negotiators and stakeholders have dropped, and the hosting nations have been much more restrained in their engagement and outreach. For example, at the Hyderabad round there was no press conference or official statement released by the government representatives or chapter negotiators.

A Broader Retreat from Stakeholder Inclusion?

This worrying retreat from democracy in trade negotiations mirrors a broader softening of support by governments for public participation in policy development. From a high point about a decade ago, when governments embraced a so-called “multi-stakeholder model” as the foundation of bodies such as the Internet Governance Forum (IGF), several countries that were previous supporters of this model seem to be much cooler towards it now. Consider the Xiamen Declaration which was adopted by consensus at the 9th BRICS (Brazil, Russia, India, China, South Africa) summit in China this month. Unlike previous BRICS declarations which supported a multi-stakeholder approach, the Xiamen declaration stresses the importance of state sovereignty throughout the document.

This trend is not confined to the BRICS bloc. Western governments, too, are excluding civil society voices from policy development, even while they experiment with methods for engaging directly with large corporations. In January this year, recognizing that technological issues are becoming matters of foreign policy, Denmark appointed a "digitisation ambassador" to engage with tech companies such as Google and Facebook. This is a poor substitute for a fully inclusive, balanced, and accountable process that would also include Internet users and other civil society stakeholders.

Given the complexity of trade negotiations and the fast-changing pace of the digital environment, negotiators are not always equipped to negotiate fair trade deals without a broader public discussion of the issues involved. In particular, including provisions related to the digital economy in trade agreements can push countries to negotiate on issues before they understand the potential consequences. A wide and open consultative process, ensuring a more balanced view of the issues at stake, could help.

The refusal of trade ministries such as the USTR to heed demands, whether from EFF or from Congress, to become more open and transparent suggests that it may be a long while before such an inclusive, balanced, and accountable process evolves out of trade negotiations as they exist now. But other venues for discussing digital trade, such as the IGF and the OECD, do exist today and could be used rather than rushing into closed-door norm-setting. One advantage of preferring these more flexible, soft-law mechanisms for developing norms on Internet-related issues is that they provide a venue for cooperation and policy coordination without locking countries into a set of rules that may become outmoded as business models and technologies continue to evolve.

This is not the model that NAFTA or RCEP negotiators have chosen, preferring to open the door to corporate lobbyists while keeping civil society locked out. This week’s letter to the trade ministries of the United States, Canada, and Mexico calls them out on this and asks them to do better. If you are in the United States, you can also join the call for better transparency in trade negotiations by asking your representative to support the Promoting Transparency in Trade Act.

EFF Asks Court: Can Prosecutors Hide Behind Trade Secret Privilege to Convict You?

EFF - Thu, 09/14/2017 - 10:57am
California Appeals Court Urged to Allow Defense Review of DNA Matching Software

If a computer DNA matching program gives test results that implicate you in a crime, how do you know that the match is correct and not the result of a software bug? The Electronic Frontier Foundation (EFF) has urged a California appeals court to allow criminal defendants to review and evaluate the source code of forensic software programs used by the prosecution, in order to ensure that none of the wrong people end up behind bars, or worse, on death row.

In this case, a defendant was linked to a series of rapes by a DNA matching software program called TrueAllele. The defendant wants to examine how TrueAllele takes in a DNA sample and analyzes potential matches, as part of his challenge to the prosecution’s evidence. However, prosecutors and TrueAllele’s manufacturer argue that the source code is a trade secret and therefore should not be disclosed to anyone.

“Errors and bugs in DNA matching software are a known problem,” said EFF Staff Attorney Stephanie Lacambra. “At least two other programs have been found to have serious errors that could lead to false convictions. Additionally, different products used by different police departments can provide drastically different results. If you want to make sure the right person is imprisoned—and not running free while someone innocent is convicted—we can’t have software programs’ source code hidden away from stringent examination.”

The public has an overriding interest in ensuring the fair administration of justice, which favors public disclosure of evidence. However, in certain cases where public disclosure could be too financially damaging, the court could use a simple protective order so that only the defendant’s attorneys and experts are able to review the code. But even this level of secrecy should be the exception and not the rule.

“Software errors are extremely common across all kinds of products,” said EFF Staff Attorney Kit Walsh. “We can’t have someone’s legal fate determined by a black box, with no opportunity to see if it’s working correctly.”

For the full brief in California v. Johnson:

Contact: Stephanie Lacambra, Criminal Defense Staff Attorney, stephanie@eff.org; Kit Walsh, Staff Attorney, kit@eff.org

With EFF’s Help, Small Business Stands Up To Patent Troll Electronic Communication Technologies, LLC

EFF - Wed, 09/13/2017 - 7:38pm

Since 1992, Fairytale Brownies has sold delicious brownies based on a secret family recipe. It’s a small business founded by two childhood friends who were quick to see the potential of the Internet and registered the domain www.brownies.com in 1995. Fairytale Brownies became an e-commerce website before the first dot-com boom and has remained in business ever since.

But earlier this year, Fairytale Brownies received a surprising letter. The letter said its e-commerce website infringes U.S. Patent No. 9,373,261 (“the ’261 patent”). The ’261 patent is owned by Electronic Communication Technologies, LLC (“ECT”), a company that was previously known as Eclipse IP. We have written about it many times before.

 What is the technology claimed by the patent?

Generally, ECT states that it patented “unique solutions to minimize hacker’s impacts when mimicking order confirmations and shipment notification emails.” From what we can tell, it claims to have invented sending, in response to an online order, an automated email that contains personally identifiable information (“PII”). ECT claims that including PII allows customers to know that the email is not a “phishing” email or “part of an email fraud system,” and as a consequence customers will know to trust the links in the email.

There are a few more details about what exactly ECT claims to have invented and what it says infringes, which can be seen in the “chart” ECT provided to Fairytale Brownies. The chart points to claim 11 of the ’261 patent and describes how ECT believes Fairytale Brownies infringes it. In doing so, ECT tells Fairytale Brownies (and everyone else) what it thinks the ’261 patent allows it to claim, and at least some subset of what it claims to have “invented.”

Fairytale Brownies disputes ECT’s reading of its claims and denies that it infringes. Regardless, as we point out in our letter on behalf of Fairytale Brownies, many, many companies engaged in the behavior ECT says it invented long before ECT ever applied for a patent. That patent, entitled “Secure notification messaging with user option to communicate with delivery or pickup representative,” issued in 2016, but ECT claims the technology was invented in 2003. Many other companies did exactly what Fairytale Brownies does: we provided copies of order confirmation emails from Amazon.com that included the so-called “anti-phishing invention” from over two years before ECT’s “invention.” (We have redacted some information from those emails to protect privacy.) We included examples from many other companies that show the same thing, including emails from NewEgg, Crate & Barrel, and Old Navy. Indeed, this “invention” seems practically mundane, and nothing that should have been seen as “novel” in 2003.

We also note in our letter that the ’261 patent is likely invalid under Alice. The Alice decision was the basis of a court ruling that invalidated claims from three of ECT’s other patents (this ruling issued when it was still known as Eclipse IP). We attach a motion that is currently pending in the Southern District of Florida that seeks a finding that claim 11 of the ’261 patent, the same claim that is asserted against Fairytale Brownies, is invalid for failing to claim patentable subject matter.

Electronic Communication Technologies demanded Fairytale Brownies pay $35,000 to license the ’261 patent.  We think that $1 would be too much. As a small business, getting a letter out of the blue demanding a payment for infringing on patents you’ve never heard of based on technology that forms a standard part of any e-commerce business can be a daunting experience. With EFF’s help, Fairytale Brownies is pushing back, and refusing to pay for something that was commonplace in e-commerce long before ECT claims to have invented it.

VICTORY: DOJ Backs Down from Facebook Gag Orders in Not-so-secret Investigation

EFF - Wed, 09/13/2017 - 6:26pm

The U.S. Department of Justice has come to the obvious conclusion that there’s no need to order Facebook to keep an investigation “secret” when it was never secret in the first place. While we applaud the government’s about-face, we question why it ever took such a ridiculous position.

Earlier this summer, Facebook brought a First Amendment challenge to gag orders accompanying several warrants in an investigation in Washington, D.C. that Facebook argued was “known to the public.” In an amicus brief joined by three other civil liberties organizations, EFF explained to the D.C. Court of Appeals that gag orders are subject to a stringent constitutional test that they can rarely meet. We noted that the timing and circumstances of the warrants were strikingly similar to the high-profile investigations of the protests surrounding President Trump’s inauguration on January 20 (known as J20). Given these facts, we argued that there was no way the First Amendment could allow gag orders preventing Facebook from informing its users that the government had obtained their data.

In a joint filing today, Facebook and the DOJ have told the court that the gag orders were no longer necessary, because the investigation had “progressed.” Of course, if the investigation in this case is about what we think it is—the January 20 protests in D.C., opposing the incoming Trump Administration—then it had “progressed” to the point where no gag orders were necessary even before the government applied for them.

While we’re pleased that the government has come to its senses in this case, it routinely uses gag orders that go far beyond the very narrow circumstances allowed by the First Amendment. We’ve fought these unconstitutional prior restraints for years, and we’ll continue to do so at every opportunity.

Read about what we had to say about the government’s original position here.

Stop SESTA: Whose Voices Will SESTA Silence?

EFF - Wed, 09/13/2017 - 4:59pm
Overreliance on Automated Filters Would Push Victims Off of the Internet

In all of the debate about the Stop Enabling Sex Traffickers Act (SESTA, S. 1693), one question has received surprisingly little attention: under SESTA, what would online platforms do in order to protect themselves from the increased liability for their users’ speech?

With the threat of overwhelming criminal and civil liability hanging over their heads, Internet platforms would likely turn to automated filtering of users’ speech in a big way. That’s bad news because when platforms rely too heavily on automated filtering, it almost always results in some voices being silenced. And the most marginalized voices in society can be the first to disappear.

Take Action

Tell Congress: Stop SESTA.

Section 230 Built Internet Communities

The modern Internet is a complex system of online intermediaries—web hosting providers, social media platforms, news websites that host comments—all of which we use to speak out and communicate with each other. Those platforms are all enabled by Section 230, a law that protects platforms from some types of liability for their users’ speech. Without those protections, most online intermediaries would not exist in their current form; the risk of liability would simply be too high.

Section 230 still allows authorities to prosecute platforms that break federal criminal law, but it keeps platforms from being punished for their customers’ actions in federal civil court or at the state level. This careful balance gives online platforms the freedom to set and enforce their own community standards while still allowing the government to hold platforms accountable for criminal behavior.

SESTA would throw off that balance by shifting additional liability to intermediaries. Many online communities would have little choice but to mitigate that risk by investing heavily in policing their members’ speech.

Or perhaps hire computers to police their members’ speech for them.

The Trouble with Bots

Massive cloud software company Oracle recently endorsed SESTA, but Oracle’s letter of support actually confirms one of the bill’s biggest problems—SESTA would effectively require Internet businesses to place more trust than ever before in automated filtering technologies to police their users’ activity.

While automated filtering technologies have certainly improved since Section 230 passed in 1996, Oracle implies that bots can now filter out sex traffickers’ activity with near-perfect accuracy without causing any collateral damage.

That’s simply not true. At best, automated filtering provides tools that can aid human moderators in finding content that may need further review. That review still requires human community managers. But many Internet companies (including most startups) would be unable to dedicate enough staff time to fully mitigate the risk of litigation under SESTA.

So what will websites do if they don’t have enough human reviewers to match their growing user bases? It’s likely that they’ll tune their automated filters to err on the side of extreme caution—which means silencing legitimate voices. To see how that would happen, look at the recent controversy over Google’s Perspective API, a tool designed to measure the “toxicity” in online discussions. Perspective API flags statements like “I am a gay woman” or “I am a black man” as toxic because it fails to differentiate between Internet users talking about themselves and making statements about marginalized groups. It even flagged “I am a Jew” as more toxic than “I don’t like Jews.”

See the problem? Now imagine a tool designed to filter out speech that advertises sex trafficking to comply with SESTA. From a technical perspective, creating such a tool that doesn’t also flag a victim of trafficking telling her story or trying to find help would be extremely difficult. (For that matter, so would training it to differentiate trafficking from consensual sex work.) If Google, the largest artificial intelligence (AI) company on the planet, can’t develop an algorithm that can reason about whether a simple statement is toxic, how likely is it that any company will be able to automatically and accurately detect sex trafficking advertisements?

Despite all the progress we’ve made in analytics and AI since 1996, machines still have an incredibly difficult time understanding subtlety and context when it comes to human speech. Filtering algorithms can’t yet understand things like the motivation behind a post—a huge factor in detecting the difference between a post that actually advertises sex trafficking and a post that criticizes sex trafficking and provides support to victims.

This is a classic example of the “nerd harder” problem, where policymakers believe that technology can advance to fit their specifications as soon as they pass a law requiring it to do so. They fail to recognize the inherent limits of automated filtering: bots are useful in some cases as an aid to human moderators, but they’ll never be appropriate as the unchecked gatekeeper to free expression. If we give them that position, then victims of sex trafficking may be the first people locked out.

At the same time, it’s also extremely unlikely that filtering systems will actually be able to stop determined sex traffickers from posting. That’s because it’s not currently technologically possible to create an automated filtering system that can’t be fooled by a human. For example, say you have a filter that just looks for certain keywords or phrases. Sex traffickers will learn what words or phrases trigger the filter and avoid them by using other words in their place.
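The keyword-evasion problem can be sketched in a few lines of Python. This is a purely hypothetical illustration—the wordlist and function names are invented, not drawn from any real moderation system—showing that a filter matching exact words is defeated by the most trivial substitution:

```python
# Hypothetical sketch of a naive keyword filter. The blocked terms here are
# harmless stand-ins; real trigger phrases would be no harder to dodge.
BLOCKED_TERMS = {"forbidden", "banned"}

def is_flagged(post: str) -> bool:
    """Flag a post if any blocked term appears as an exact word."""
    words = post.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

print(is_flagged("this post uses a forbidden word"))   # True: caught
print(is_flagged("this post uses a f0rbidden word"))   # False: trivially evaded
```

Tightening such a filter—substring matching, catching character substitutions—catches some evasions but also starts flagging innocent posts, which is exactly the over-blocking trade-off described above.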

Building a more complicated filter—say, by using advanced machine learning or AI techniques—won’t solve the problem either. That’s because all complex machine learning systems are susceptible to what are known as “adversarial inputs”—examples of data that look normal to a human, but which completely fool AI-based classification systems. For example, an AI-based filtering system that recognizes sex trafficking posts might look at such a post and classify it correctly—unless the sex trafficker adds some random-looking-yet-carefully-chosen characters to the post (maybe even a block of carefully constructed incomprehensible text at the end), in which case the filtering system will classify the post as having nothing to do with sex trafficking.

If you’ve ever seen a spam email with a block of nonsense text at the bottom, then you’ve seen this tactic in action. Some spammers add blocks of text from books or articles to the bottom of their spam emails in order to fool spam filters into thinking the emails are legitimate. Research on solving this problem is ongoing, but slow. New developments in AI research will likely make filters a more effective aid to human review, but when freedom of expression is at stake, they’ll never supplant human moderators.
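The padding tactic can be illustrated with a toy scorer. This is a hedged sketch—the weights and threshold are invented for demonstration, and real spam filters are far more sophisticated—but it shows the underlying mechanic: appending innocuous text dilutes a per-word score until the post slips under the flagging threshold.

```python
# Toy bag-of-words spam scorer (weights and threshold are invented).
SPAM_WEIGHTS = {"win": 2.0, "free": 2.0, "click": 1.5}
THRESHOLD = 0.5

def spam_score(text: str) -> float:
    """Average spamminess per word; 0.0 for empty input."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(SPAM_WEIGHTS.get(w, 0.0) for w in words) / len(words)

spam = "win free money click now"
padding = "it was the best of times it was the worst of times " * 5

print(spam_score(spam) > THRESHOLD)                   # True: flagged
print(spam_score(spam + " " + padding) > THRESHOLD)   # False: padded past filter
```

The spam content is unchanged in the padded version; only the denominator grew. An adversarial input against a modern classifier works on the same principle, just with more carefully chosen noise.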

In other words, not only would automated filters be ineffective at removing sex trafficking ads from the Internet, they would also almost certainly end up silencing the very victims lawmakers are trying to help.

Don’t Put Machines in Charge of Free Speech

One irony of SESTA supporters’ praise for automated filtering is that Section 230 made algorithmic filtering possible. In 1995, a New York court ruled that because the online service Prodigy engaged in some editing of its members’ posts, it could be held liable as a “publisher” for the posts that it didn’t filter. When Reps. Christopher Cox and Ron Wyden introduced the Internet Freedom and Family Empowerment Act (the bill that would evolve into Section 230), they did it partially to remove that legal disincentive for online platforms to enforce community standards. Without Section 230, platforms would never have invested in improving filtering technologies.

However, automated filters simply cannot be trusted as the final arbiters of online speech. At best, they’re useful as an aid to human moderators, enforcing standards that are transparent to the user community. And the platforms using them must carefully balance enforcing standards with respecting users’ right to express themselves. Laws must protect that balance by shielding platforms from liability for their customers’ actions. Otherwise, marginalized voices can be the first ones pushed off the Internet.

Take Action

Tell Congress: Stop SESTA.

Three Lies Big Internet Providers Are Spreading to Kill the California Broadband Privacy Bill

EFF - Wed, 09/13/2017 - 4:07pm

Now that California’s Broadband Privacy Bill, A.B. 375, is headed for a final vote in the California legislature, Comcast, Verizon, and all their allies are pulling out all the stops to try to convince state legislators to vote against the bill. Unfortunately, that includes telling legislators about made-up problems the bill will supposedly create, as well as tweeting out blatantly false statements and taking out online ads that spread lies about what A.B. 375 will really do. To set the record straight, here are three lies big Internet providers and their allies are spreading—and the truth about how A.B. 375 will protect your privacy and security online.

Lie #1: A.B. 375 Will Prevent Internet Providers From Stopping Future Cyberattacks

In their opposition letter to legislators, big Internet providers and their allies claim that A.B. 375 “prevents Internet providers from using information they have long relied upon to prevent cybersecurity attacks.”

That’s a lie.

A.B. 375 explicitly says that Internet providers can use customers’ personal information (including things like IP addresses and traffic records) “to protect the rights or property of the BIAS provider, or to protect users of the BIAS and other BIAS providers from fraudulent, abusive, or unlawful use of the service.” In other words, A.B. 375 explicitly allows Internet providers to use the same information they’ve always used to detect intrusion attempts, stop cyber-attacks, and catch data breaches before they happen. And they can still work with other Internet providers to prevent attacks by sharing this vital security information, so long as they de-identify the data first by making sure it’s not linkable to an individual or device.
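To illustrate what “de-identify” can mean in practice: one common approach is pseudonymization with a keyed hash, which lets providers correlate attack reports without exposing subscriber addresses. The sketch below is purely illustrative—the bill does not prescribe any particular technique, and the key and function names here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical per-provider secret; in practice it would be stored
# securely and rotated, not hardcoded.
SECRET_KEY = b"rotate-this-key-regularly"

def deidentify_ip(ip_address: str) -> str:
    """Replace an IP address with a stable pseudonym.

    The same input always yields the same pseudonym, so two providers'
    reports about the same attacker can still be matched up, but the
    original address can't be recovered without the secret key.
    """
    digest = hmac.new(SECRET_KEY, ip_address.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]
```

With a scheme like this, shared threat data stays useful for spotting repeat attackers while remaining unlinkable to any individual subscriber or device by the recipient.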

The truth: A.B. 375 will have no impact on what Internet providers can do to protect their customers’ security. If big Internet providers really think otherwise, we challenge them to publicly explain how—because so far all they’ve done is spread FUD.

Lie #2: A.B. 375 Will Lead to Pop-Ups(?!)

In their letter to legislators, big Internet providers also claim that A.B. 375 would “lead to recurring pop-ups to consumers.” We’ve seen the same claim about pop-ups in an online ad circulated by opponents of A.B. 375.

This claim is a lie too, and we have no idea how any rational person could read A.B. 375 and think “maybe that will mean more pop-ups.” The best we can come up with is that since A.B. 375 would require Internet providers to get your consent before sharing your data, maybe they think that if they constantly pester people with pop-ups, they’ll succeed in wearing people down until they give their consent. If that’s really what Comcast and Verizon are implying, then lawmakers should understand the claim for what it really is: a threat to hold consumers hostage in the fight for online privacy. As with Lie #1, if big Internet providers have a better explanation, we challenge them to provide it publicly.

As an aside, it’s worth noting that, if anything, A.B. 375 will likely result in fewer pop-ups, not to mention fewer intrusive ads during your everyday browsing experience. That’s because A.B. 375 will prevent Internet providers from using your data to sell ads they target to you without your consent—which means they’ll be less likely to insert ads into your web browsing, like some Internet providers have done in the past.

Lie #3: A.B. 375 Will Expose You to Hackers

Not only are opponents of A.B. 375 so desperate that they’re making stuff up (see Lie #2 above), they’re also trying to scare lawmakers into thinking that A.B. 375 will do the opposite of what it really does. In particular, they’re claiming that it will expose consumers to hackers. Of course, big Internet providers and their allies won’t explain how this would happen—even when we’ve asked them politely for a direct explanation.

Let’s set the record straight.

Contrary to the FUD Comcast, AT&T, Verizon, and their allies are spreading, A.B. 375 will make it less likely that your information falls into the hands of identity thieves, and will make it harder for hackers to target you online.

As we explained back in March of 2017, before Congress killed the FCC’s privacy rules:

In order for Internet providers to make money off your browsing history, they first have to collect that information—what sort of websites you’re browsing, metadata about whom you’re talking to, and maybe even what search terms you’re using. Internet providers will also need to store that information somewhere, in order to build up a targeted advertising profile of you…

[But] Internet providers haven’t exactly been bastions of security when it comes to keeping information about their customers safe. Back in 2015, Comcast had to pay $33 million for unintentionally releasing information about customers who had paid Comcast to keep their phone numbers unlisted. “These customers ranged from domestic violence victims to law enforcement personnel”, many of whom had paid for their numbers to be unlisted to protect their safety. But Comcast screwed up, and their phone numbers were published anyway.

And that was just a mistake on Comcast’s part, with a simple piece of data like phone numbers, [which wasn’t even triggered by an outside attack]. Imagine what could happen if hackers decided to [actively] target the treasure trove of personal information Internet providers start collecting. People’s personal browsing history and records of their location could easily become the target of foreign hackers who want to embarrass or blackmail politicians or celebrities. To make matters worse, FCC Chairman (and former Verizon lawyer) Ajit Pai recently halted the enforcement of a rule that would require Internet providers to “take reasonable measures to protect customer [personal information] from unauthorized use, disclosure, or access”—so Internet providers won’t be on the hook if their lax security exposes your data.

With A.B. 375, the scenario described above is much less likely, because Internet providers won’t have as much incentive to collect your data in the first place. The logic is simple: no treasure trove of data, no target for hackers; no target for hackers, nothing for them to expose.

But the benefits of A.B. 375 go beyond reducing the risk of identity theft to consumers. A.B. 375 will also help reduce consumers’ exposure to dangerous cyber-attacks. That’s because many of the ways big Internet providers want to monetize your data have a side-effect of reducing your security online, including:

  • A standard called Explicit Trusted Proxies, proposed by Internet providers, which would allow your Internet provider to intercept your data, remove the encryption, read the data (and maybe even modify it), and then encrypt it again and send it on its way. The cybersecurity problem? According to a recent alert by US-CERT, an organization dedicated to computer security within the Department of Homeland Security, many of the systems designed to decrypt and then re-encrypt data actually end up weakening the security of the encryption, which exposes users to increased risk of cyber-attack. In fact, a recent study found that more than half of the connections that were intercepted (i.e. decrypted and re-encrypted) ended up with weaker encryption.
  • Inserting ads into your browsing. Here we’re talking about your Internet provider placing additional ads in the webpages you view (beyond the ones that were already placed there by the publisher). Why is this dangerous? Because inserting new code into a webpage in an automated fashion could break the security of the existing code in that page. As security expert Dan Kaminsky put it, inserting ads could break “all sorts of stuff, in that you no longer know as a website developer precisely what code is running in browsers out there. You didn't send it, but your customers received it.” In other words, security features in sites and apps you use could be broken and hackers could take advantage of that—causing you to do anything from send your username and password to them (while thinking it was going to the genuine website) to install malware on your computer.
  • Pre-installing spyware on your mobile phone. In the past, Internet providers have installed spyware like Carrier IQ on phones, claiming it was only to “improve wireless network and service performance.” So where’s the cybersecurity risk? As we’ve explained before, part of the problem with Carrier IQ was that it could be configured to record sensitive information into your phone’s system logs. But some apps transmit those logs off of your phone as part of standard debugging procedures, assuming there’s nothing sensitive in them. As a result, “keystrokes, text message content and other very sensitive information [was] in fact being transmitted from some phones on which Carrier IQ is installed to third parties.” Depending on how that information was transmitted, eavesdroppers could also intercept it—meaning hackers might be able to see your username or password, without having to do any real hacking.
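The ad-injection risk in the second bullet has a concrete, standards-based countermeasure: Subresource Integrity (SRI), where a publisher embeds a cryptographic hash of each script in its page and the browser refuses to run any script that doesn’t match—including one modified in transit. A minimal sketch of how such an integrity value is computed (the script contents here are made up for illustration):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute an SRI-style integrity string for a script.

    This is the value a publisher would embed in its page, e.g.
    <script src="app.js" integrity="sha384-...">.
    """
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# A publisher's original script, and the same script after an
# intermediary injects ad code into it.
original = b"console.log('hello');"
tampered = original + b"/* injected ad code */"

# The tampered version no longer matches the published hash, so a
# browser enforcing SRI would refuse to execute it.
assert sri_hash(tampered) != sri_hash(original)
```

Note that SRI only protects resources the publisher explicitly hashes; it doesn’t stop an ISP from injecting entirely new content into unencrypted pages, which is part of why removing the financial incentive matters.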

The common thread in all three of these cybersecurity risks is that the strongest reason an Internet provider would have for introducing them is to make money by collecting your data, selling it, and using it to target ads at you. A.B. 375 would remove that motivation. If A.B. 375 passes, Internet providers won’t have any reason to weaken your security in order to collect your data or insert ads into your web browsing.

Privacy and security are two sides of the same coin, and when you strengthen one you strengthen the other. That’s why we need to do everything we can to make sure A.B. 375 passes the California legislature. Please, if you live in California, call your state legislator today and tell them not to believe the lies Comcast, AT&T, Verizon, and their allies are spreading. Tell them to support A.B. 375.



Data Protection Measure Removed from the California Values Act

EFF - Wed, 09/13/2017 - 2:56pm

Shortly after the November election, human rights groups joined California Senate President Pro Tem Kevin de León in introducing a comprehensive bill to protect data collected by the government from being used for mass deportations and religious registries. S.B. 54, known as the California Values Act, also included a sweeping measure requiring all state agencies to reevaluate their privacy policies and to only collect the minimum amount of data they need to offer services.

EFF was an early and strong supporter of the bill, generating more than 750 emails from our supporters. Over subsequent months, the bill was split into multiple pieces of legislation. The ban on using data to create religious registries was moved to S.B. 31, the California Religious Freedom Act, while the general protections morphed into a data privacy bill, S.B. 244, both authored by Sen. Ricardo Lara.

S.B. 54 became an important set of measures designed to deal exclusively with immigrant rights by limiting California law enforcement’s cooperation with U.S. Immigration and Customs Enforcement and other immigration authorities. These provisions include barring local law enforcement officials from inquiring about immigration status, keeping people in custody on immigration “holds,” or using immigration agents as interpreters—all measures that would help protect our immigrant family members, neighbors, coworkers, and friends from persecution.

The bill originally would also have created a firewall between data collected by California law enforcement and federal immigration authorities. This piece was key to EFF’s support of the bill, but we’ve sadly seen it weakened over the course of the legislative session. First, law enforcement successfully pressured the author to write an exemption for the California Law Enforcement Telecommunications System (CLETS), potentially giving ICE instant access to large databases of information—including DMV records and RAP sheets. As we have reported, CLETS is frequently abused by law enforcement and the system has woefully insufficient oversight.

In the latest batch of amendments negotiated by Governor Jerry Brown’s office and de León, the remaining database restrictions were eliminated. A new section was added that would require the California Attorney General to develop “guidance” on ensuring that police limit the availability of their databases, "to the fullest extent practicable and consistent with federal and state law," for purposes of immigration enforcement. Such guidance might be valuable to state and local police agencies that want to avoid sharing their databases with immigration enforcers. But if the legislation passes, state and local police would merely be “encouraged” to adopt the guidance.

EFF long supported this bill because it contained a “database firewall.” But with that measure gone, the nexus with digital issues has largely evaporated. The optional compliance with guidance is not enough to warrant EFF’s continued support, and so we are moving to a neutral position on the legislation. This means we are closing our online email campaign. 

EFF does not oppose S.B. 54. In fact, our analysis of the bill identifies many ways in which the bill will protect the rights of the many thousands of immigrants living in our communities. A large coalition of human rights organizations continues to fight for its passage. If you would like to continue to support the current version of S.B. 54, you can visit the ICE Out of CA coalition website or one of the ACLU’s action pages.

EFF continues to support S.B. 31 and S.B. 244, the spin-off bills on religious registries and data privacy, respectively. They would advance digital rights. 

EFF is disappointed that the digital rights provisions of S.B. 54 were cut from the bill. By taking a neutral position, we hope to encourage the legislature to enact the “database firewall” next session.


EFF, ACLU Sue Over Warrantless Phone, Laptop Searches at U.S. Border

EFF - Wed, 09/13/2017 - 12:51am
Lawsuit on Behalf of 11 Travelers Challenges Unconstitutional Searches of Electronic Devices

Teleconference With Plaintiffs and Attorneys Today at 12 p.m. ET/9 a.m. PT: Media Access Number: 866-564-2842, Code: 5147769

Boston, Massachusetts—The Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) sued the Department of Homeland Security (DHS) today on behalf of 11 travelers whose smartphones and laptops were searched without warrants at the U.S. border.

The plaintiffs in the case are 10 U.S. citizens and one lawful permanent resident who hail from seven states and come from a variety of backgrounds. The lawsuit challenges the government’s fast-growing practice of searching travelers’ electronic devices without a warrant. It seeks to establish that the government must have a warrant based on probable cause to suspect a violation of immigration or customs laws before conducting such searches.

The plaintiffs include a military veteran, journalists, students, an artist, a NASA engineer, and a business owner. Several are Muslims or people of color. All were reentering the country from business or personal travel when border officers searched their devices. None were subsequently accused of any wrongdoing. Officers also confiscated and kept the devices of several plaintiffs for weeks or months—DHS has held one plaintiff’s device since January. EFF, ACLU, and the ACLU of Massachusetts are representing the 11 travelers.

“People now store their whole lives, including extremely sensitive personal and business matters, on their phones, tablets, and laptops, and it’s reasonable for them to carry these with them when they travel. It’s high time that the courts require the government to stop treating the border as a place where they can end-run the Constitution,” said EFF Staff Attorney Sophia Cope.

Plaintiff Diane Maye, a college professor and former U.S. Air Force officer, was detained for two hours at Miami International Airport when coming home from a vacation in Europe in June. “I felt humiliated and violated. I worried that border officers would read my email messages and texts, and look at my photos,” she said. “This was my life, and a border officer held it in the palm of his hand. I joined this lawsuit because I strongly believe the government shouldn’t have the unfettered power to invade your privacy.”

Plaintiff Sidd Bikkannavar, an engineer for NASA’s Jet Propulsion Laboratory in California, was detained at the Houston airport on the way home from vacation in Chile. A U.S. Customs and Border Protection (CBP) officer demanded that he reveal the password for his phone. The officer returned the phone a half-hour later, saying that it had been searched using “algorithms.”

Another plaintiff was subjected to violence. Akram Shibly, an independent filmmaker who lives in upstate New York, was crossing the U.S.-Canada border after a social outing in the Toronto area in January when a CBP officer ordered him to hand over his phone. CBP had just searched his phone three days earlier when he was returning from a work trip in Toronto, so Shibly declined. Officers then physically restrained him, with one choking him and another holding his legs, and took his phone from his pocket. They kept the phone, which was already unlocked, for over an hour before giving it back.

“I joined this lawsuit so other people don’t have to have to go through what happened to me,” Shibly said. “Border agents should not be able to coerce people into providing access to their phones, physically or otherwise.”

The number of electronic device searches at the border began increasing in 2016 and has grown even more under the Trump administration. CBP officers conducted nearly 15,000 electronic device searches in the first half of fiscal year 2017, putting CBP on track to conduct more than three times as many searches as in fiscal year 2015 (8,503) and some 50 percent more than in fiscal year 2016 (19,033).

“The government cannot use the border as a dragnet to search through our private data,” said ACLU attorney Esha Bhandari. “Our electronic devices contain massive amounts of information that can paint a detailed picture of our personal lives, including emails, texts, contact lists, photos, work documents, and medical or financial records. The Fourth Amendment requires that the government get a warrant before it can search the contents of smartphones and laptops at the border.”

Below is a full list of the plaintiffs:

  • Ghassan and Nadia Alasaad are a married couple who live in Massachusetts, where he is a limousine driver and she is a nursing student.

  • Suhaib Allababidi, who lives in Texas, owns and operates a business that sells security technology, including to federal government clients.

  • Sidd Bikkannavar is an optical engineer for NASA’s Jet Propulsion Laboratory in California.

  • Jeremy Dupin is a journalist living in Boston.

  • Aaron Gach is an artist living in California.

  • Isma’il Kushkush is a journalist living in Virginia.

  • Diane Maye is a college professor and former captain in the U.S. Air Force living in Florida.

  • Zainab Merchant, from Florida, is a writer and a graduate student at Harvard University.

  • Akram Shibly is a filmmaker living in New York.

  • Matthew Wright is a computer programmer in Colorado.

The case, Alasaad v. Duke, was filed in the U.S. District Court for the District of Massachusetts.

For the complaint:

For more on this case and plaintiff profiles:

For more on digital security at the border:

Tags: Border Searches

Contact:
Sophia Cope, Staff Attorney, sophia@eff.org
Adam Schwartz, Senior Staff Attorney, adam@eff.org
Josh Bell, ACLU Media Strategist, media@aclu.org

California Broadband Privacy Bill Heads for Final Vote This Friday

EFF - Tue, 09/12/2017 - 5:48pm

Huge news for broadband privacy! A California bill that would restore many of the privacy protections that Congress stripped earlier this year is headed for a final vote this Friday.

The bill, A.B. 375, had languished in the Senate Rules Committee due to the efforts of AT&T, Comcast, and Verizon to deny a vote. But constituents called and emailed their representatives, and reporters started asking questions. The overwhelming public support for privacy has so far counteracted the lobbying by telecommunications companies, which will spare no expense to keep the gift handed to them by Congress and the Trump administration.

The legislation is now one step away from landing on the governor’s desk, with final votes set for September 15, the last day of session. If you live in California, you have 72 hours remaining to call your state Assemblymember and state Senator and tell them to vote AYE on A.B. 375 this Friday.

The Battle Behind the Scenes

Despite widespread public support, A.B. 375 has faced significant procedural hurdles behind the scenes in Sacramento, in part because the bill was introduced late in the legislative cycle in response to the Congressional vote. The bill emerged victorious from two Senate committees with strong votes in July before getting stuck in the Senate Rules Committee in the weeks that followed. Recognizing that they could not beat this bill on the votes, the ISP industry players that opposed it (note that many ISPs in California support A.B. 375) opted to run out the clock.

On Tuesday, however, Senate Rules Committee Chair and Senate Leader Kevin de León decided to move the bill for a vote, and his committee approved discharging the bill to the Senate floor. Following that step, the full California Senate voted 25 to 13 to make it eligible for a final vote. These procedural steps are necessary because, under Proposition 54, approved by voters in 2016, bills must be on public display for 72 hours before a final vote.

The bill’s final version continues to mirror the now-repealed FCC broadband privacy rule, and as it stands the bill would effectively return to California consumers control over the personal data their ISP obtains from providing broadband service. That means your browsing history, the applications you use, and your location when you use the Internet are firmly controlled by you. If A.B. 375 is enacted, ISPs must have your permission before they can sell or share that data with third parties beyond what is needed to provide broadband access services.

Furthermore, the bill expands consumer protection beyond the original FCC rule by banning pay-for-privacy practices, such as AT&T’s effort to charge people $30 more for broadband if they did not surrender their private information. While AT&T dropped its plan once the FCC began updating the privacy rules in 2015, the successful lobbying campaign to repeal the rule cleared the way for the company to revive it. If A.B. 375 becomes law, though, residents of this state will never face what was essentially a privacy tax.

The stage is set for a final vote this Friday, the last day of session for the California legislature, so that the bill can be sent to the Governor’s office this year. Speak up now to demand that the legislature put people’s privacy over ISP profits.

Take Action

Tell your representatives to support online privacy.

With iOS 11, More Options to Disable Touch ID Means Better Security

EFF - Tue, 09/12/2017 - 3:21pm

When iOS 11 is released to the public next week, it will bring a new feature with big benefits for user security. Last month, some vigilant Twitter users using the iOS 11 public beta discovered a new way to quickly disable Touch ID by just tapping the power button five times. This is good news for users, particularly those who may be in unpredictable situations with physical security concerns that change over time.

The newly uncovered feature is simple. Tapping an iPhone power button rapidly five times will bring up an option to dial 9-1-1. After a user reaches that emergency services screen, Touch ID is temporarily disabled until they enter a passcode. In other words, you can call emergency services without unlocking the phone—but then your fingerprint can’t unlock it.

This is a big improvement on previously known and relatively clunky methods for disabling Touch ID, including restarting the phone, swiping a different finger five times to force a lock-out, or navigating through settings to disable it manually.

Not About Law Enforcement

While there is some speculation that this feature is intended to defeat law enforcement—with some going as far as to call it a “cop button”—it is, at its core, a common-sense security feature. The option to disable Touch ID quickly and inconspicuously is helpful for any user who needs more choices and flexibility in their physical security. Think about all the situations where a user might be worried that someone will unexpectedly force them to unlock their phone: a mugging, domestic abuse from a partner or parent, physical harassment or stalking, bullying. Even the fact that the feature is activated alongside an option to quickly call 9-1-1 links it to a whole range of emergency situations in which law enforcement is not already present.

Constitutional Bonus

Even though this new feature is not aimed at law enforcement, it brings a potentially unintended side effect: now you can use Touch ID without giving up your Fifth Amendment rights. The Fifth Amendment of course gives us the right to remain silent in interactions with the government. In legalese, we say that it provides us a right to be free from compelled self-incrimination. EFF has long argued that the Fifth Amendment protects us from having to turn over our passwords. But the government, and a number of digital law scholars such as EFF Special Counsel Marcia Hofmann, have suggested that our fingerprints may not have such protection, and some courts have agreed. With this new feature, we no longer have to choose between maintaining our Fifth Amendment right to refuse to unlock our phones and the convenience of Touch ID.

We call on other manufacturers to follow Apple’s lead and implement this kind of design in their own devices.