The Federal Emergency Management Agency will this week test a new “presidential alert” system that will allow the president to send a message to every phone in the US.
The alert is the first nationwide test of the presidential alert system, which, FEMA said in an advisory, allows the president to address the nation in the event of a national emergency.
Using the Wireless Emergency Alert (WEA) system, anyone with cell service should receive the message on their phone.
“THIS IS A TEST of the National Wireless Emergency Alert System. No action is needed,” the message, due to be sent out on Thursday at 2:18pm ET, will read.
Minutes later, the Emergency Alert System (EAS) will broadcast a similar test message over television, radio, and wireline video services.
Emergency alerts aren’t new and warning systems have long been used — and tested — in the US to alert citizens of local and state incidents, like AMBER alerts for missing children and severe weather events that may result in danger to or loss of life.
But presidential alerts have yet to be tested. Unlike other alerts, citizens will not be allowed to opt out of presidential alerts.
The president’s authority to send nationwide alerts comes from the WARN Act, passed in 2006 under the Bush administration, which created a state-of-the-art emergency alert system to replace an aging infrastructure. As alarming as these alerts can be (and are designed to be), the system aims to modernize alerts for a population increasingly moving away from televisions and toward mobile technology.
These presidential alerts are solely at the discretion of the president and can be sent for any reason, but experts have shown little concern that the system may be abused.
But the system isn’t perfect. Earlier this year, panic spread in Hawaii after an erroneous alert went out to residents warning of a “ballistic missile threat inbound.” The message said, “this is not a drill.” The false warning came amid the height of tensions between the US and North Korea, which at the time was regularly testing its ballistic missiles as part of its nuclear weapons program.
More than 100 carriers will participate in the test, FEMA said.
Fake social media profiles are useful to foreign adversaries for more than just sowing political discord, as it turns out. A group linked to the North Korean government has been able to duck existing sanctions on the country by concealing its true identity and developing software for clients abroad.
This week, the US Treasury issued sanctions against two tech companies accused of running cash-generating front operations for North Korea: Yanbian Silverstar Network Technology or “China Silver Star,” based near Shenyang, China, and a Russian sister company called Volasys Silver Star. The Treasury also sanctioned China Silver Star’s North Korean CEO Jong Song Hwa.
“These actions are intended to stop the flow of illicit revenue to North Korea from overseas information technology workers disguising their true identities and hiding behind front companies, aliases, and third-party nationals,” Treasury Secretary Steven Mnuchin said of the sanctions.
As the Wall Street Journal reported in a follow-up story, North Korean operatives advertised with Facebook and LinkedIn profiles, solicited business with Freelance.com and Upwork, crafted software using GitHub, communicated over Slack, and accepted compensation with PayPal. The country appears to be encountering little resistance putting tech platforms built by US companies to work building software including “mobile games, apps, [and] bots” for unwitting clients abroad.
The US Treasury issued its first warnings of a secret North Korean software development scheme in July, though it did not provide many details at the time. The Wall Street Journal was able to identify “tens of thousands” of dollars stemming from the Chinese front company, though that’s only a representative sample. The company worked as a middleman, contracting its work out to software developers around the globe and then denying payment for their services.
Facebook suspended many suspicious accounts linked to the scheme after they were identified by the Wall Street Journal, including one for “Everyday-Dude.com”:
“A Facebook page for Everyday-Dude.com, showing packages with hundreds of programs, was taken down minutes later as a reporter was viewing it. Pages of some of the account’s more than 1,000 Facebook friends also subsequently disappeared…
“[Facebook] suspended numerous North Korea-linked accounts identified by the Journal, including one that Facebook said appeared not to belong to a real person. After it closed that account, another profile, with identical friends and photos, soon popped up.”
LinkedIn and Upwork similarly removed accounts linked to the North Korean operations.
Beyond the consequences for international relations, software surreptitiously sold by the North Korean government poses considerable security risks. According to the Treasury, the North Korean government makes money off of a “range of IT services and products abroad” including “website and app development, security software, and biometric identification software that have military and law enforcement applications.” For companies unwittingly buying North Korea-made software, the potential for malware that could give the isolated nation eyes and ears beyond its borders is high, particularly given that the country has already demonstrated its offensive cyber capabilities.
Given that, and the sanctions against doing business with the country, Mnuchin is urging the information technology industry and other businesses to stay alert to the ongoing scheme to avoid accidentally contracting with North Korea on tech-related projects.
Bon anniversaire, Let’s Encrypt!
The free-to-use nonprofit certificate authority was founded in 2014 in part by the Electronic Frontier Foundation and is backed by Akamai, Google, Facebook, Mozilla and more. Three years ago Friday, it issued its first certificate.
Since then, the numbers have exploded. To date, more than 380 million certificates have been issued on 129 million unique domains. That also makes it the largest certificate issuer in the world, by far.
Now, 75 percent of all Firefox traffic is HTTPS, according to public Firefox data — in part thanks to Let’s Encrypt. That’s a massive increase from when it was founded, when only 38 percent of website page loads were served over an HTTPS encrypted connection.
“Change at that speed and scale is incredible,” a spokesperson told TechCrunch. “Let’s Encrypt isn’t solely responsible for this change, but we certainly catalyzed it.”
HTTPS is what keeps the pipes of the web secure. Every time your browser lights up in green or flashes a padlock, it’s a TLS certificate encrypting the connection between your computer and the website, ensuring nobody can intercept and steal your data or modify the website.
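The core idea behind that encrypted connection is key agreement: the browser and the server settle on a shared secret over a public channel, and that secret encrypts everything that follows. Real TLS is far more involved (certificates, vetted groups, authenticated handshakes); the toy Diffie-Hellman exchange below, with deliberately small and insecure parameters, sketches only the key-agreement step:

```python
# Toy Diffie-Hellman key exchange: the idea underlying TLS's handshake.
# Real TLS uses vetted 2048-bit-plus groups or elliptic curves and
# authenticates the server via its certificate; these tiny numbers are
# purely illustrative.
p = 0xFFFFFFFB  # public prime (toy-sized)
g = 5           # public generator

alice_secret = 123456789  # private values, never sent on the wire
bob_secret = 987654321

alice_public = pow(g, alice_secret, p)  # exchanged in the clear
bob_public = pow(g, bob_secret, p)

# Each side combines its own secret with the other's public value.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)

assert alice_key == bob_key  # both sides derive the same shared key
```

In real TLS, the server’s certificate is what authenticates its half of the exchange — which is exactly why an eavesdropper can’t quietly substitute their own public value.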
But for years, the certificate market was broken, expensive and difficult to navigate. In an effort to “encrypt the web,” the EFF and others banded together to bring free TLS certificates to the masses.
That means bloggers, single-page websites and startups alike can get an easy-to-install certificate for free — even news sites like TechCrunch rely on Let’s Encrypt for a secure connection. Security experts and encryption advocates Scott Helme and Troy Hunt last month found that more than half of the top million websites by traffic are on HTTPS.
And as it’s grown, the certificate issuer has become trusted by the major players — including Apple, Google, Microsoft, Oracle and more.
A fully encrypted web is still a ways off. But with close to a million Let’s Encrypt certificates issued each day, it looks more within reach than ever.
The UK government says that access to satellites and space surveillance programs will suffer in the event of a “no deal” departure from the European Union.
Britain has less than six months to go before the country leaves the 28-member-state bloc, after a little over half the country voted to withdraw membership from the European Union in a 2016 referendum. So far, the Brexit process has been a hot mess of political infighting and uncertainty, bureaucracy and backstabbing — amid threats of coups and leadership challenges. And the government isn’t even close to scoring a deal to keep trade ties open, immigration flowing, and airplanes taking off.
Now, the government has further said that services reliant on EU membership — like access to space programs — will be affected.
The reassuring news is that car and phone GPS maps won’t suddenly stop working.
But the government said that the UK will “no longer play any part” in the EU’s GPS efforts, shutting out businesses, academics and researchers, who will be excluded from future contracts and “may face difficulty carrying out and completing existing contracts.”
“There should be no noticeable impact if the UK were to leave the EU with no agreement in place,” the government said, but the UK is investing £92 million ($120m) to fund its own UK-based GPS system. The notice also said that the UK’s military and intelligence agencies will no longer have access to the EU’s Public Regulated Service, a hardened GPS system that enhances protections against spoofing and jamming. But that system isn’t expected to be in place until 2020, so the government isn’t immediately concerned.
The UK will also no longer be part of the Copernicus program, an EU-based Earth observation initiative that’s a critical asset to national security, contributing to maritime surveillance, border control and the understanding of climate change. Although the program’s data is free and open, the UK government says that users will no longer have high-bandwidth access to data from the satellites and additional data, though it admits that it’s “seeking to clarify” the terms.
Although this is the “worst case scenario” in the event of no final agreement on the divorce settlement from Europe, with just months to go and much ground still to cover, a “no deal” is looking increasingly likely.
It’s been over a year since highly classified exploits built by the National Security Agency were stolen and published online.
One of the tools, dubbed EternalBlue, can covertly break into almost any Windows machine around the world. It didn’t take long for hackers to start using the exploits to run ransomware on thousands of computers, grinding hospitals and businesses to a halt. Two separate attacks in as many months used WannaCry and NotPetya ransomware, which spread like wildfire. Once a single computer in a network was infected, the malware would also target other devices on the network. The recovery was slow and cost companies hundreds of millions in damages.
Although WannaCry infections have slowed, hackers are still using the publicly accessible NSA exploits to infect computers to mine cryptocurrency.
Nobody knows that better than one major Fortune 500 multinational, which was hit by a massive WannaMine cryptocurrency mining infection just days ago.
“Our customer is a very large corporation with multiple offices around the world,” said Amit Serper, who heads the security research team at Boston-based Cybereason.
“Once their first machine was hit the malware propagated to more than 1,000 machines in a day,” he said, without naming the company.
Cryptomining attacks have been around for a while. It’s more common for hackers to inject cryptocurrency mining code into vulnerable websites, but the payoffs are low. Some news sites are now installing their own mining code as an alternative to running ads.
But WannaMine works differently, Cybereason said in its post-mortem of the infection. By using those leaked NSA exploits to gain a single foothold into a network, the malware tries to infect any computer within. It’s persistent so the malware can survive a reboot. After it’s implanted, the malware uses the computer’s processor to mine cryptocurrency. On dozens, hundreds, or even thousands of computers, the malware can mine cryptocurrency far faster and more efficiently. Though it’s a drain on energy and computer resources, it can often go unnoticed.
After the malware spreads within the network, it modifies the power management settings to prevent the infected computer from going to sleep. Not only that, the malware tries to detect other cryptomining scripts running on the computer and terminates them — likely to squeeze every bit of energy out of the processor, maximizing its mining effort.
Based on up-to-date statistics from Shodan, a search engine for open ports and databases, at least 919,000 servers are still vulnerable to EternalBlue, with some 300,000 machines in the US alone. And that’s just the tip of the iceberg — that figure can represent either individual vulnerable computers or a vulnerable network server capable of infecting hundreds or thousands more machines.
Cybereason said companies are still severely impacted because their systems aren’t protected.
“There’s no reason why these exploits should remain unpatched,” the blog post said. “Organizations need to install security patches and update machines.”
If not ransomware yesterday, it’s cryptomining malware today. Given how versatile the EternalBlue exploit is, tomorrow it could be something far worse — like data theft or destruction.
In other words: if you haven’t patched already, what are you waiting for?
What once sounded like science fiction is now a reality: creating almost-perfectly faked videos of people saying things they never did, made possible by modern computing power and the ability to instantly share them on the world’s social stage.
But US lawmakers are worried that these faked videos could be used by the enemy to harm national security.
If you’re unaware, “deep fakes” are digitally manipulated videos — which, using existing footage mixed with artificial intelligence and machine learning, can be made to look like, or close to, the real thing.
Unsurprisingly, one of the first uses of deep fake videos was for porn — by superimposing faces onto others.
But now, lawmakers think that deep fakes could be used as part of wider disinformation campaigns — known to be a tactic of adversarial nation states like Russia — in an effort to sway elections or spread false news.
“Deep fakes could become a potent tool for hostile powers seeking to spread misinformation,” said Rep. Adam Schiff, the ranking Democrat on the House Intelligence Committee, in a letter to Dan Coats, director of national intelligence.
“As deep fake technology becomes more advanced and more accessible, it could pose a threat to United States public discourse and national security, with broad and concerning implications for offensive active measures campaigns targeting the United States,” said the letter, co-signed by Reps. Stephanie Murphy (D-FL) and Carlos Curbelo (R-FL).
The lawmakers have a point. In recent years, disinformation has been on the rise, and it was a major factor in the meddling during the 2016 presidential election. Now, imagine that instead of false and misleading news it was fake videos of politicians throwing shade at their rivals — or worse.
Take this deep fake video — created by BuzzFeed News of what appears to be former President Obama calling President Trump a “dipshit” — to show how easy it is.
What might be good fun on one hand can, on the other, have a major effect on those who are none the wiser.
Schiff, Murphy, and Curbelo want the director of national intelligence — who oversees the nation’s intelligence community — to report back on its assessment of how deep fake technology could harm national security interests, and if there are countermeasures to protect against foreign influence — and their limitations.
The DNI’s office was asked to report back to Congress by mid-December.
When reached, a spokesperson for the DNI did not immediately comment. If that changes, we’ll update.
Ant Financial has denied claims that it covertly raided Equifax — the U.S. credit firm that was hit by a hack last year — to grab information, including code, confidential data and documents to help recruit staff for its own credit scoring service.
The Alibaba affiliate, which is valued at over $100 billion, launched Sesame Credit in China in 2015, and a report this week from The Wall Street Journal suggests that it leaned heavily on Equifax to do so. Ant Financial hired China-born Canadian David Zou from Equifax and the Journal claims that Zou looked up employee information to gauge potential hires and squirreled away confidential documents via his personal email account.
Ant was said to have offered Chinese staff at Equifax lucrative raises — reportedly tripling their salaries — with a focus on those who “provided instructions on specific Equifax information… if they jumped ship.” Apparently, however, only Zou did.
Zou, for his part, denies the claims. He said he looked up Equifax team members to help with work on his project in Canada, and forwarded information to his email account in order to continue his work when he went home.
Ant Financial went a step further with its own denial — from the firm’s statement:
Ant Financial did not use Equifax intellectual property or trade secrets, including code, algorithms or methodology in the development of our credit rating product. Ant Financial has found absolutely no evidence of Equifax software, data or code having been transferred to our systems.
We did not directly or indirectly encourage potential job applicants to obtain Equifax intellectual property or trade secrets. This would be a violation of Ant Financial’s Code of Business Conduct and we would take immediate action against any employee found engaging in this behavior. Further, we have specific agreements with our third-party recruiters that prohibit them from violating intellectual property rights of any parties. If any recruiter is found to have conducted such activities, we will stop accepting candidate referrals from them and may take legal action against them.
Ant said the Journal’s report is “full of innuendo based on disjointed facts and coincidence in timing.”
Beyond Ant, the report claims Equifax was also concerned when an unnamed Chinese firm swapped members of its delegation in the run-up to a meeting, a tactic that is apparently common in potential cases of espionage.
The company had been in contact with the FBI, but ultimately Equifax decided against pushing the matter. The Journal’s report also suggested that federal investigators backed down because they sensed that Equifax didn’t believe it had information that Chinese spies would be keen to get hold of. In addition, it hadn’t lost consumer information. Ultimately, of course, that leaked out when the firm was hacked last year.
“The story not only promotes hostility against a specific company, but also paints an overall narrative that maligns Chinese companies as a whole, and further promotes culturally divisive perceptions of ethnic Chinese people in America,” Ant said in its statement, which is attributed to the company’s general counsel, Leiming Chen.
Most modern computers, even devices with disk encryption, are vulnerable to a new attack that can steal sensitive data in a matter of minutes, new research says.
In new findings published Wednesday, F-Secure said that none of the existing firmware security measures in every laptop it tested “does a good enough job” of preventing data theft.
F-Secure principal security consultant Olle Segerdahl told TechCrunch that the vulnerabilities put “nearly all” laptops and desktops — both Windows and Mac users — at risk.
The new exploit is built on the foundations of a traditional cold boot attack, which hackers have long used to steal data from a shut-down computer. Modern computers overwrite their memory when a device is powered down, scrambling the data so it can’t be read. But Segerdahl and his colleague Pasi Saarinen found a way to disable the overwriting process, making a cold boot attack possible again.
“It takes some extra steps,” said Segerdahl, but the flaw is “easy to exploit.” So much so, he said, that it would “very much surprise” him if this technique isn’t already known by some hacker groups.
“We are convinced that anybody tasked with stealing data off laptops would have already come to the same conclusions as us,” he said.
It’s no secret that if you have physical access to a computer, the chances of someone stealing your data are usually greater. That’s why so many use disk encryption — like BitLocker for Windows and FileVault for Macs — to scramble and protect data when a device is turned off.
But the researchers found that in nearly all cases they can still steal data protected by BitLocker and FileVault regardless.
After the researchers figured out how the memory overwriting process works, they said it took just a few hours to build a proof-of-concept tool that prevented the firmware from clearing secrets from memory. From there, the researchers scanned for disk encryption keys, which, when obtained, could be used to mount the protected volume.
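The key-scanning step can be illustrated with a toy sketch. This is not F-Secure’s actual tool, and real attacks use more specific signatures (such as AES key schedules); one common simple heuristic, shown here, is that keys look like random bytes while most memory is highly structured, so small high-entropy regions are candidates:

```python
# Toy illustration of scanning a raw memory image for key material.
# Heuristic: encryption keys are indistinguishable from random bytes,
# so unusually high-entropy windows are flagged as candidates.
# (Not the researchers' actual method; illustrative only.)
import math

def shannon_entropy(block: bytes) -> float:
    """Bits per byte of a block."""
    counts = {}
    for b in block:
        counts[b] = counts.get(b, 0) + 1
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def find_key_candidates(image: bytes, window: int = 32, threshold: float = 4.5):
    """Return offsets of windows whose entropy suggests key material."""
    return [
        off
        for off in range(0, len(image) - window + 1, window)
        if shannon_entropy(image[off:off + window]) >= threshold
    ]

# Simulated memory image: zeroed pages with a 32-byte "key" planted.
fake_key = bytes(range(32))        # stand-in for an AES-256 key
image = bytes(4096) + fake_key + bytes(4096)
print(find_key_candidates(image))  # prints [4096], the key's offset
```

The hard part of the attack is getting an un-scrambled image of memory in the first place; once that’s in hand, locating the key material is the easy step.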
It’s not just disk encryption keys at risk, Segerdahl said. A successful attacker can steal “anything that happens to be in memory,” like passwords and corporate network credentials, which can lead to a deeper compromise.
Their findings were shared with Microsoft, Apple, and Intel prior to release. According to the researchers, only a smattering of devices aren’t affected by the attack. Microsoft said in a recently updated article on BitLocker countermeasures that using a startup PIN can mitigate cold boot attacks, but Windows users with “Home” licenses are out of luck. And any Apple Mac equipped with a T2 chip is not affected, though a firmware password would still improve protection.
Both Microsoft and Apple downplayed the risk.
Acknowledging that an attacker needs physical access to a device, Microsoft said it encourages customers to “practice good security habits, including preventing unauthorized physical access to their device.” Apple said it was looking into measures to protect Macs that don’t come with the T2 chip.
When reached, Intel would not comment on the record.
In any case, the researchers say, there’s not much hope that affected computer makers can fix their fleet of existing devices.
“Unfortunately, there is nothing Microsoft can do, since we are using flaws in PC hardware vendors’ firmware,” said Segerdahl. “Intel can only do so much, their position in the ecosystem is providing a reference platform for the vendors to extend and build their new models on.”
Companies, and users, are “on their own,” said Segerdahl.
“Planning for these events is a better practice than assuming devices cannot be physically compromised by hackers because that’s obviously not the case,” he said.
If you weren’t done watching tech giants get grilled by lawmakers, mark your calendar for September 26 in what’s expected to be another riveting round of questioning.
Policy chiefs from AT&T and Charter, along with senior executives at Apple, Amazon, Google and Twitter, will face questions from the Senate Commerce Committee later this month about how each company approaches safeguards to consumer privacy. The tech and telco companies will be asked to “discuss possible approaches to safeguarding privacy more effectively,” among other things.
Notably absent is Facebook, though the committee says the witness list is subject to change.
Committee chairman Sen. John Thune said the hearing will allow the companies to “explain their approaches to privacy, how they plan to address new requirements from the European Union and California, and what Congress can do to promote clear privacy expectations without hurting innovation.”
Beyond that, it’s not clear exactly what the point of the hearing is.
A congressional source told TechCrunch to expect each company to explain, for one, what it could do to protect privacy beyond what the law requires, and what role Congress can play in creating a single set of privacy requirements.
This will be the latest in a string of hearings in recent months following the Cambridge Analytica scandal, which embroiled Facebook in an exposure of millions of users’ data.
This will be the second Senate Commerce Committee hearing this year focused on the issue. Facebook chief executive Mark Zuckerberg was called to testify in April, and the Senate Intelligence Committee has since held several hearings to discuss election security and disinformation campaigns around the 2018 midterm elections.
The European Union’s executive body is doubling down on its push for platforms to pre-filter the Internet, publishing a proposal today for all websites to monitor uploads in order to be able to quickly remove terrorist content.
The Commission handed platforms an informal one-hour rule for removing terrorist content back in March. It’s now proposing turning that into a law to prevent such content spreading its violent propaganda over the Internet.
For now the ‘rule of thumb’ regime continues to apply. But it’s putting meat on the bones of its thinking, fleshing out a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.
As per usual EU processes, the Commission’s proposal would need to gain the backing of Member States and the EU parliament before it could be cemented into law.
One major point to note here is that existing EU law does not allow Member States to impose a general obligation on hosting service providers to monitor the information that users transmit or store. But in the proposal the Commission argues that, given the “grave risks associated with the dissemination of terrorist content”, states could be allowed to “exceptionally derogate from this principle under an EU framework”.
So it’s essentially suggesting that Europeans’ fundamental rights might not, in fact, be so fundamental. (Albeit, European judges might well take a different view — and it’s very likely the proposals could face legal challenges should they be cast into law.)
What is being suggested would also apply to any hosting service provider that offers services in the EU — “regardless of their place of establishment or their size”. So, seemingly, not just large platforms, like Facebook or YouTube, but — for example — anyone hosting a blog that includes a free-to-post comment section.
Websites that fail to promptly take down terrorist content would face fines — with the level of penalties being determined by EU Member States (Germany has already legislated to enforce social media hate speech takedowns within 24 hours, setting the maximum fine at €50M).
“Penalties are necessary to ensure the effective implementation by hosting service providers of the obligations pursuant to this Regulation,” the Commission writes, envisaging the most severe penalties being reserved for systematic failures to remove terrorist material within one hour.
It adds: “When determining whether or not financial penalties should be imposed, due account should be taken of the financial resources of the provider.” So — for example — individuals with websites who fail to moderate their comment section fast enough might not be served the very largest fines, presumably.
The proposal also encourages platforms to develop “automated detection tools” so they can take what it terms “proactive measures proportionate to the level of risk and to remove terrorist material from their services”.
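One way such “automated detection tools” work in practice is hash matching: industry efforts maintain a shared database of hashes of known terrorist content, and a host checks each upload against it before publishing. A minimal sketch of that idea follows (illustrative only; production systems use perceptual hashes that survive re-encoding, not plain SHA-256 as here):

```python
# Toy sketch of hash-based upload filtering. A real deployment would use
# a shared industry database and perceptual hashing; this only shows the
# exact-match lookup at the core of the idea.
import hashlib

# Hypothetical blocklist of hashes of known banned content.
known_bad_hashes = {
    hashlib.sha256(b"example banned payload").hexdigest(),
}

def allow_upload(data: bytes) -> bool:
    """Reject an upload whose hash matches known banned content."""
    return hashlib.sha256(data).hexdigest() not in known_bad_hashes

print(allow_upload(b"holiday photos"))          # True: not on the blocklist
print(allow_upload(b"example banned payload"))  # False: exact match
```

Note the limitation that fuels the debate: exact hashing only catches re-uploads of already-identified material, which is why the proposal also pushes toward broader, riskier forms of proactive detection.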
So the Commission’s continued push for Internet pre-filtering is clear. (This is also a feature of its copyright reform — which is being voted on by MEPs later today.)
Albeit, it’s not alone on that front. Earlier this year the UK government went so far as to pay an AI company to develop a terrorist propaganda detection tool that used machine learning algorithms trained to automatically detect propaganda produced by the Islamic State terror group — with a claimed “extremely high degree of accuracy”. (At the time it said it had not ruled out forcing tech giants to use it.)
What is terrorist content for the purposes of this proposal? The Commission refers to an earlier EU directive on combating terrorism — which defines the material as “information which is used to incite and glorify the commission of terrorist offences, encouraging the contribution to and providing instructions for committing terrorist offences as well as promoting participation in terrorist groups”.
And on that front you do have to wonder whether, for example, some of U.S. president Donald Trump’s comments last year after the far-right rally in Charlottesville, where a counter-protester was murdered by a white supremacist (comments in which he suggested there were “fine people” among those same murderous and violent white supremacists), might not fall under that ‘glorifying the commission of terrorist offences’ umbrella, should, say, someone repost them to a comment section that was viewable in the EU…
Safe to say, even terrorist propaganda can be subjective. And the proposed regime will inevitably encourage borderline content to be taken down — having a knock-on impact upon online freedom of expression.
The Commission also wants websites and platforms to share information with law enforcement and other relevant authorities and with each other — suggesting the use of “standardised templates”, “response forms” and “authenticated submission channels” to facilitate “cooperation and the exchange of information”.
It tackles the problem of what it refers to as “erroneous removal” — i.e. content that’s removed after being reported or erroneously identified as terrorist propaganda but which is subsequently, under requested review, determined not to be — by placing an obligation on providers to have “remedies and complaint mechanisms to ensure that users can challenge the removal of their content”.
So platforms and websites will be obligated to police and judge speech — which they already do, of course, but the proposal doubles down on turning online content hosts into judges and arbiters of that same content.
The regulation also includes transparency obligations on the steps being taken against terrorist content by hosting service providers — which the Commission claims will ensure “accountability towards users, citizens and public authorities”.
Other perspectives are of course available…
There is no way a hosting provider (including your private website, if it includes comments section) can comply with these obligations without #UploadFilters. It’s not limited to large platforms. The @EU_Commission has ignored 100% of the #copyright discussion. #SaveYourInternet pic.twitter.com/WeV0GwDZVD
— Julia Reda (@Senficon) September 12, 2018
The Commission envisages all taken down content being retained by the host for a period of six months so that it could be reinstated if required, i.e. after a valid complaint — to ensure what it couches as “the effectiveness of complaint and review procedures in view of protecting freedom of expression and information”.
It also sees the retention of takedowns helping law enforcement — meaning platforms and websites will continue to be co-opted into state law enforcement and intelligence regimes, getting further saddled with the burden and cost of having to safely store and protect all this sensitive data.
(On that the EC just says: “Hosting service providers need to put in place technical and organisational safeguards to ensure the data is not used for other purposes.”)
The Commission would also create a system for monitoring the monitoring it’s proposing platforms and websites undertake — thereby further extending the proposed bureaucracy — saying it would establish a “detailed programme for monitoring the outputs, results and impacts” within one year of the regulation being applied; report on its implementation and the transparency elements within two years; and evaluate the entire functioning of it four years after it comes into force.
The executive body says it consulted widely ahead of forming the proposals — including running an open public consultation, carrying out a survey of 33,500 EU residents, and talking to Member States’ authorities and hosting service providers.
“By and large, most stakeholders expressed that terrorist content online is a serious societal problem affecting internet users and business models of hosting service providers,” the Commission writes. “More generally, 65% of respondents to the Eurobarometer survey considered that the internet is not safe for its users and 90% of the respondents consider it important to limit the spread of illegal content online.
“Consultations with Member States revealed that while voluntary arrangements are producing results, many see the need for binding obligations on terrorist content, a sentiment echoed in the European Council Conclusions of June 2018. While overall, the hosting service providers were in favour of the continuation of voluntary measures, they noted the potential negative effects of emerging legal fragmentation in the Union.
“Many stakeholders also noted the need to ensure that any regulatory measures for removal of content, particularly proactive measures and strict timeframes, should be balanced with safeguards for fundamental rights, notably freedom of speech. Stakeholders noted a number of necessary measures relating to transparency, accountability as well as the need for human review in deploying automated tools.”
September is Apple hardware season, when we expect new iPhones, a new Apple Watch and more. But what makes the good stuff run is the software within.
First revealed earlier this year at the company’s annual WWDC developer event in June, iOS 12 and macOS Mojave focus on a running theme: security and privacy for the masses.
Ahead of Wednesday’s big reveal, here’s all the good stuff to look out for.

macOS Mojave
What does it do? Safari will use a new “intelligent tracking prevention” feature to prevent advertisers from following you from site to site. Even social networks like Facebook know which sites you visit because so many embed Facebook’s tools — like the comments section or the “Like” button.
Why does it matter? Tracking prevention will prevent ad firms from building a unique “fingerprint” of your browser, making it difficult to serve you targeted ads — even when you’re in incognito mode or private browsing. That’s an automatic boost for personal privacy as these companies will find it more difficult to build up profiles on you.

Camera, microphone, backups now require permission
What does it do? Just like when an app asks you for access to your contacts and calendar, now Mojave will ask for permission before an app can access your FaceTime camera and microphone, as well as location data, backups and more.
Why does it matter? By expanding this feature, it’s much more difficult for apps to switch on your camera without warning or record from your microphone without you noticing. That’s going to prevent surreptitious ultrasonic ad tracking and surveillance by malware that hijacks your camera. Requiring permission for access to your backups — often unencrypted — will also prevent malware or hackers from quietly stealing your data.

iOS 12
What does it do? iOS 12’s built-in password manager, which stores all your passwords for easy access, will now tell you if you’re using the same password across different sites and apps.
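Apple hasn’t published how the check works, but the core idea is simple enough to sketch: group saved logins by password and flag any password that appears more than once. The sites and passwords below are made up for illustration.

```python
# A toy version of a password-reuse check. All site names and passwords
# here are hypothetical; iOS 12's actual implementation is not public.
from collections import defaultdict

def find_reused_passwords(vault):
    """Group saved logins by password and return any password used on 2+ sites."""
    by_password = defaultdict(list)
    for site, password in vault:
        by_password[password].append(site)
    return {pw: sites for pw, sites in by_password.items() if len(sites) > 1}

vault = [
    ("example-bank.com", "hunter2"),
    ("example-shop.com", "hunter2"),   # reused - should be flagged
    ("example-mail.com", "xK9!pQ2#"),  # unique - fine
]
reused = find_reused_passwords(vault)
```

A real password manager would compare stored credentials without keeping plaintext around any longer than necessary, but the grouping logic is the same.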
Why does it matter? Password reuse is a real problem. If you use the same password on every site, it only takes one site breach to grab your password for every other site you use. iOS 12 will let you know if you’re using a weak password or the same password on different sites. Your passwords are easily accessible with your fingerprint or your passcode.

Two-factor codes will be auto-filled
What does it do? When you are sent a two-factor code — such as a text message or a push notification — iOS 12 will take that code and automatically enter it into the login box.
Why does it matter? Two-factor authentication is good for security — it adds an extra layer of protection on top of your username and password. But adoption is low because two-factor is cumbersome and frustrating. This feature keeps the security intact while making the process more seamless and less annoying.

USB Restricted Mode makes hacking more difficult
What does it do? This new security feature will lock any accessories out of your device — including USB cables and headphones — when your iPhone or iPad has been locked for more than an hour.
Why does it matter? This optional feature — first added in iOS 11.4.1 but likely to see wider adoption with iOS 12 — will make it more difficult for law enforcement (and hackers) to plug in to your device and steal your sensitive data. Because your device is encrypted, not even Apple can get your data, but some devices — like the GrayKey — can brute-force your passcode. This feature will render these devices largely ineffective.
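To see why an unlocked data port matters, consider the arithmetic of guessing numeric passcodes. The attempt rate below is an assumption for illustration; real-world rates depend on the hardware and on iOS’s own throttling.

```python
# Worst-case time to try every numeric passcode of a given length,
# at an assumed (hypothetical) attempt rate.
def seconds_to_exhaust(digits: int, attempts_per_second: float) -> float:
    return 10 ** digits / attempts_per_second

# At a hypothetical 10 attempts per second with no throttling:
four_digit_hours = seconds_to_exhaust(4, 10) / 3600   # 10,000 codes
six_digit_days = seconds_to_exhaust(6, 10) / 86400    # 1,000,000 codes
```

Even a six-digit passcode falls in under two days at that rate, which is why cutting off the USB port after an hour blunts this class of device.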
Apple’s event starts Wednesday at 10am PT (1pm ET).
You know what isn’t a good look for a data management software company? A massive mismanagement of your own customer data.
Veeam, a backup and data recovery company, bills itself as a data giant that, among other things, can “anticipate need and meet demand, and move securely across multi-cloud infrastructures” — but it’s believed to have left its own database of customer records exposed.
Security researcher Bob Diachenko found an exposed database containing more than 200 gigabytes of customer records, mostly names, email addresses, and in some cases IP addresses. That might not seem like much but that data would be a goldmine for spammers or bad actors conducting phishing attacks.
Diachenko, who blogged about his latest find, said the database didn’t have a password and could be accessed by anyone who knew where to look.
The database held more than 200 gigabytes of data, including two collections containing 199.1 million and 244.4 million email addresses and records respectively, spanning a four-year period between 2013 and 2017. Without downloading the entire data set, it’s not known how many records are duplicates.
After TechCrunch informed the company of the exposure, the server was pulled offline within three hours.
When initially reached for comment, Veeam spokesperson Heidi Kroft said: “We will continue to conduct a deeper investigation and we will take appropriate actions based on our findings.”
Veeam says on its website that it has 307,000 customers covering most of the Fortune 500.
It’s not the first time a massive database of email addresses has leaked online. An exposed database run by River City Media leaked over 393 million email addresses in 2017, which prompted a frivolous lawsuit against the security researcher who found it. Later that year, a massive spambot operation holding 711 million email addresses, believed to be the largest ever, was uncovered by a Paris-based researcher.
Payment card data entered through the airline’s website and mobile app was stolen over the three-week period, but a key clue was that travel information wasn’t affected.
Yonathan Klijnsma, a threat researcher at RiskIQ, suspected it might be the same group that was behind the Ticketmaster breach, in which hackers targeted a third-party that loaded code on Ticketmaster’s various sites. From there, it could siphon off thousands of transactions.
This time, Klijnsma said the group took an even more “highly targeted approach,” describing a wave of attacks that the “Magecart” collective has used to steal thousands of records from various sites in recent months.
“This British Airways attack was just an extension of this campaign,” he said, prior to the release of his research.
His research, out Tuesday, points to hackers injecting malicious code directly onto the company’s website — into a script the airline shared between the website and the mobile app. Using his company’s proprietary web crawling technology, he found that code hosted on the airline’s global site was compromised on August 21 — the reported date of the breach — without anyone noticing.
When a customer bought plane tickets, the code would scrape the credit card information from the open payment page and forward the data to a fake site run by the hackers on a private server in Romania.
Names, billing addresses, email addresses, and all bank card details were collected by the code.
“This attack is a simple but highly targeted approach compared to what we’ve seen in the past with the Magecart skimmer which grabbed forms indiscriminately,” said Klijnsma. “This particular skimmer is very much attuned to how British Airways’ payment page is set up, which tells us that the attackers carefully considered how to target this site instead of blindly injecting the regular Magecart skimmer.”
That would explain why the financial data was collected but not the travel and passport data. It also explains why the mobile app was affected, Klijnsma said, because an analysis of the mobile app also loaded the same data-scraping script.
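RiskIQ’s crawler is proprietary, but the kind of change detection it performs can be sketched simply: record a hash of every script a page serves, then alert when a script’s contents change between crawls. The script URL and contents below are stand-ins, not British Airways’ actual code.

```python
# A minimal sketch of third-party script monitoring: fingerprint each
# script by hash and flag any that differ from the recorded baseline.
import hashlib

def fingerprint(scripts):
    """Map each script URL to a SHA-256 digest of its contents."""
    return {url: hashlib.sha256(body.encode()).hexdigest()
            for url, body in scripts.items()}

def changed_scripts(baseline, latest):
    """Return URLs whose contents differ from the recorded baseline."""
    return [url for url, digest in latest.items()
            if baseline.get(url) not in (None, digest)]

# Hypothetical before/after crawl of the same shared library file:
before = fingerprint({"/js/vendor.js": "function probe() { /* vendor code */ }"})
after = fingerprint({"/js/vendor.js": "function probe() { /* vendor code */ } send(form);"})
alerts = changed_scripts(before, after)
```

The same idea underpins defenses like Subresource Integrity, where the page itself declares the expected hash of each script it loads.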
“There’s so many ways they could have stolen the payment or [personal] information, they went for this really simple method, but it’s super effective,” said Klijnsma.
But, he said, “they went from super advanced to simplifying their attacks — and their [returns are] more insane than ever.”
British Airways spokesperson Liza Ravenscroft declined to comment citing an ongoing criminal investigation.
New York prosecutors have extradited a Russian hacker accused of breaking into JP Morgan, one of the world’s largest banking institutions.
Moscow resident Andrei Tiurin, 35, was charged Friday with the theft of over 80 million records from the bank in 2014, after he was extradited from neighboring Georgia. The alleged hacker is said to have been under the direction of Gery Shalon, who was separately indicted a year after the breach.
Tiurin was also charged with wire and securities fraud, and aggravated identity theft, bringing his maximum possible prison time to over 80 years.
“Andrei Tyurin, a Russian national, is alleged to have participated in a global hacking campaign that targeted major financial institutions, brokerage firms, news agencies, and other companies,” said Manhattan U.S. attorney Geoffrey Berman in remarks.
Although the indictment did not name the New York-based financial news agency, The Wall Street Journal previously reported the victim as its parent company Dow Jones, following the first round of charges in 2015.
The hackers allegedly targeted other firms, including an unnamed Boston, Mass.-based mutual fund and online stock brokerage firm. The indictment said that the hackers exploited the “Heartbleed” vulnerability — a known flaw in the widely used OpenSSL cryptographic library — to gain a foothold into the institution’s network.
Tiurin was also accused of trying to artificially inflate the “price of certain stocks publicly traded in the United States,” and obtained “hundreds of millions of dollars in illicit proceeds” from various hacking campaigns.
“Today’s extradition marks a significant milestone for law enforcement in the fight against cyber intrusions targeting our critical financial institutions,” said Berman.
Dow Jones declined to comment. JP Morgan did not immediately respond to a request for comment.
A lot can change in a year. Not when you’re Equifax.
The credit rating giant, one of the largest in the world, was trusted with some of the most sensitive data used by banks and financiers to determine who can be lent money. But the company failed for months to patch a web server it knew was vulnerable, which let hackers break into its systems and steal data on 147 million consumers. Names, addresses, Social Security numbers and more were taken — and millions of driver’s license and credit card numbers were stolen in the breach. Millions of British and Canadian nationals were also affected, sparking a global response to the breach.
It was “one of the most egregious examples of corporate malfeasance since Enron,” said Senate Democratic leader Chuck Schumer at the time.
Yet, a year on from the devastating hack that left the company reeling from a breach of almost every American adult, the company has faced little to no action or repercussions.
In the aftermath, the company’s response to the breach was chaotic, sending consumers scrambling to learn if they were affected, only to be led to a broken site that was itself vulnerable to hacking. And when consumers were looking for answers, Equifax’s own Twitter account sent concerned users to a site that easily could have been a phishing page had it not been for a good samaritan.
Yet, the company went unpunished. In the end, Equifax was in law as much a victim as the 147 million Americans.
“There was a failure of the company, but also of lawmakers,” said Mark Warner, a Democratic senator, in a call with TechCrunch. Warner, who serves Virginia, was one of the first lawmakers to file new legislation after the breach. Alongside his Democratic colleague, Sen. Elizabeth Warren, the two senators said their bill, if passed, would hold credit agencies accountable for data breaches.
“With Equifax, they knew for months before they reported, so at what point is that violating securities laws by not having that notice?” said Warner.
“The message sent to the market is ‘if you can endure some media blowback, you can get through this without serious long-term ramifications’, and that’s totally unacceptable,” he said.
Lawmakers held hearings and grilled the company’s former chief executive, Richard Smith, who retired with his full $90 million retirement package, adding insult to injury. Equifax further shuffled its executive suite, including the hiring of a new chief information security officer Jamil Farshchi and former lawyer turned “chief transformation officer” Julia Houston to oversee “the company’s response to the cybersecurity incident.”
Equifax declined to make either executive available for interview or comment when reached by TechCrunch, but Equifax spokesperson Wyatt Jefferies said protecting customer data is the company’s “top priority.”
But there’s not much to show for it beyond superficial gestures of free credit monitoring — provided by Equifax, no less — and a credit locking app which, unsurprisingly, had its own flaws. In the year since, the company has spent more than $240 million — some $50 million of which was covered by cyber-insurance. That’s a drop in the ocean against more than $3 billion in revenue in the year since, according to quarterly earnings filings — or more than $500 million in profits. And although Equifax’s stock price initially collapsed in the weeks following, the price bounced back.
Financially, the company looks almost as healthy as it’s ever been. But that may change.
Earlier this year, the company asked a federal judge to reject claims from dozens of banks and credit unions for costs taken to prevent fraud following the data breach. The claims, if accepted, could force Equifax to shell out tens of millions of dollars — perhaps more. The hundreds of class action suits filed to date have yet to hit the courts, but historically even the largest class action cases have resulted in single dollar amounts for the individuals affected.
And when the credit agency giant isn’t fighting claims in the courts, federal regulators have shown little interest in pursuing legal action.
An investigation launched by a former head of the Consumer Financial Protection Bureau, responsible for protecting consumers from fraud, sputtered after the new director reportedly declined to pursue the company. And, although the company is under investigation by the Federal Trade Commission for the second time this decade, fines are likely to be limited — if levied at all.
Warren sent a letter Thursday to the heads of both agencies lamenting their lack of action.
“Companies like Equifax do not ask the American people before they collect their most sensitive information,” said Warren. “This information can determine their ability to access credit, obtain a job, secure a home loan, purchase a car, and make dozens of other transactions that are critical to their personal financial security.”
“The American people deserve an update on your investigations,” she said.
To date, only the Securities and Exchange Commission has brought charges — not for the breach itself, but against three former staffers for alleged insider trading.
Equifax also escaped local action: it agreed with eight states, including New York and California, to take further cybersecurity measures to prevent another breach, avoiding any fines or financial penalties.
Warner blamed much of the inaction on the patchwork of data breach laws that vary by state.
“We’ve got different laws and you don’t have any standard, and part of the challenge around the data breach is that every industry wants to be exempted,” said Warner. It’s not a partisan issue, he said, but one where every industry — from telecoms to retail — wants to be exempt from the law.
“If we really want to improve our business cyber-hygiene, you have got to have consequences for failing to keep up those cyber-hygiene standards,” he said.
It’s a tough sell to posit Equifax, which fluffed almost every step of the breach process, before and after its disclosure, as a victim. While the millions affected can take solace in the beating Equifax got in the press, those demanding regulatory action might be in for a disappointingly long wait.
A group of security researchers say dozens of popular iPhone apps are quietly sharing the location data of “tens of millions of mobile devices” with third-party data monetization firms.
Almost all of the apps — like weather and fitness apps — require access to a user’s location data to work properly, but often share that data as a way to generate revenue for free-to-download apps.
In many cases, the apps send precise locations and other sensitive, identifiable data “at all times, constantly,” and often with “little to no mention” that location data will be shared with third parties, say security researchers at the GuardianApp project.
“I believe people should be able to use any app they wish on their phone without fear that granting access to sensitive data may mean that this data will be quietly sent off to some entity who they do not know and do not have any desire to do business with,” said Will Strafach, one of the researchers.
Using tools to monitor network traffic, the researchers found 24 popular iPhone apps that were collecting location data — from Bluetooth beacons to Wi-Fi network names — to know where a person is and where they visit. These data monetization firms also collect other device data, including accelerometer readings, battery charge status and cell network names.
In exchange for the data, these firms often pay app developers to help grow their databases, which are used to deliver ads based on a person’s location history.
But although many claim they don’t collect personally identifiable information, Strafach said that latitude and longitude coordinates can pin a person to a house or their work.
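The reason coordinates are so identifying comes down to precision. One degree of latitude spans roughly 111,320 meters, so each decimal place in a reported coordinate narrows the position by a factor of ten:

```python
# Approximate north-south resolution of a latitude value with a given
# number of decimal places (one degree of latitude is ~111,320 meters).
def latitude_precision_meters(decimal_places: int) -> float:
    return 111_320 / (10 ** decimal_places)

# Four decimals (e.g. 40.7484) already pins a point to about 11 meters:
# easily a single house or office.
house_level = latitude_precision_meters(4)
```

Which is why “anonymous” coordinate streams can still be tied back to a home address or workplace with very little effort.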
To name a few:
ASKfm, a teen-focused anonymous question-and-answer app, has 1,400 ratings on the Apple App Store and touts tens of millions of users. It asks for access to a user’s location that “won’t be shared with anyone.” But the app sends that location data to two data firms, AreaMetrics and Huq. When reached, the app maker said it believes its location collection practices “fit industry standards, and are therefore acceptable for our users.”
NOAA Weather Radar has more than 266,000 reviews and has millions of downloads. Access to your location “is used to provide weather info.” But an earlier version of the app from March was sending location data to three firms, Factual, Sense360 and Teemo. The code has since been removed. A spokesperson for Apalon, which built the app, said it “conducted a limited, brief test with a few of these providers” earlier this year.
Homes.com is a popular app that asks that you switch on your location to help “find nearby homes.” But the code, thought to be old code, still sends precise coordinates to AreaMetrics. The app maker said it used AreaMetrics “for a short period” last year but said the code was deactivated.
And the list goes on — including more than a hundred Sinclair-owned local news and weather apps, which share location data with Reveal, a data tracking and monetization firm, which the company says will help the media giant bolster its sales by “providing advertisers with target audiences.”
That can quickly become a lucrative business for developers with popular apps and monetization firms alike, some of which collect billions of locations each day.
Most of the data monetization firms deny any wrongdoing and say that users can opt out at any time. Most said they require app developers to explicitly state that they are collecting and sending data to third-party firms.
The team’s research shows that those requirements are almost never verified.
Sense360 said the data it collects is anonymous and that it requires apps to get explicit consent from their users, but Strafach said few apps he’s seen contained language seeking that consent. The company did not answer a specific question about why it no longer works with certain apps. Wireless Registry said it also requires apps to seek consent from users, but would not comment on the security measures it uses to ensure user privacy. And in remarks, inMarket said it follows advertising standards and guidelines.
Cuebiq claims to use an “advanced cryptography method” to store and transmit data, but Strafach said he found “no evidence” that any data was scrambled. The company says it isn’t a “tracker,” and that while some app developers look to monetize users’ data, most use it for insights. And Factual said it uses location data for advertising and analytics, but that it must obtain in-app consent from users.
“None of these companies appear to be legally accountable for their claims and practices; instead there is some sort of self-regulation they claim to enforce,” said Strafach.
He said there isn’t much users can do, but limiting ad tracking in your iPhone’s privacy settings can make it more difficult for location trackers to identify users.
Apple’s crackdown on apps that don’t have privacy policies kicks in next month. But given how few people read them in the first place, don’t expect apps to change their behavior any time soon.
Following this release, Orfox, the longstanding Tor Project-approved browsing app for Android, said it will be sunset by 2019. To run both apps, users will need to also download the Tor Project proxy app, Orbot.
Tor Project’s anonymous browser uses a system of decentralized relays that bounce a user’s data to anonymize internet activity. This makes it almost impossible for ads, location trackers and even government surveillance to follow your tracks across the web. While Tor is often associated with illegal drug or weapons sales on the dark web, the browser is also a haven for political dissidents, journalists and anyone who prefers to maintain their anonymity.
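The relay scheme can be illustrated with a toy example: the client wraps a message in one layer of encryption per relay, and each relay can peel off only its own layer, so no single relay sees both the sender and the plaintext. XOR with a per-relay key stands in here for Tor’s real cryptography; the relay names and keys are invented.

```python
# Toy onion routing: one layer per relay, peeled off hop by hop.
# XOR is NOT real encryption; it just makes the layering visible.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relay_keys = [b"entry-key", b"middle-key", b"exit-key"]

def wrap(message: bytes) -> bytes:
    """Client side: apply the exit relay's layer first, the entry relay's last."""
    for key in reversed(relay_keys):
        message = xor(message, key)
    return message

def route(onion: bytes) -> bytes:
    """Each relay in order removes exactly one layer."""
    for key in relay_keys:
        onion = xor(onion, key)
    return onion

delivered = route(wrap(b"hello, hidden service"))
```

Peeling a single layer still leaves the message unreadable, which is the property that keeps any one relay from seeing the full picture.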
This release comes just days after Tor Project rolled out Tor Browser 8.0, based on Firefox’s 2017 Quantum browser structure. The major updates include a new user landing and on-boarding page, increased language support and improved bridging methods to enable users to access the browser in countries where Tor is banned — like Turkey.
While the service is regarded as the current gold standard of anonymous browsing, there are still vulnerabilities. Federal investigators have been able to identify users through security flaws in the browser itself. It remains to be seen how secure Firefox Quantum will be as the basis for Tor 8.0, but users would be wise to follow Tor’s guidelines on further protecting their anonymity, just in case.
Sonatype, a cybersecurity-focused open-source company, has raised $80 million from investment firm TPG.
The company said the financing will help extend its Nexus platform, which it touts as an enterprise-ready repository manager and library that, among other things, tracks code and helps to keep everything in the devops pipeline up-to-date and secure.
It’s that kind of technology that Sonatype says can prevent another Equifax-style breach of 147 million consumers’ data. Earlier this year, the company found that dozens of Fortune Global 100 companies had downloaded outdated and vulnerable versions of Apache Struts, the software Equifax failed to patch or update.
Sonatype’s chief executive Wayne Jackson said his company can help prevent those types of breaches.
“We monitor literally millions of open source commits per day,” he told TechCrunch. “Last year hundreds of billions of components were downloaded by software developers, 12 percent of which had known security defects.”
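Sonatype’s engine is proprietary, but the basic shape of such a check is easy to sketch: compare an application’s declared dependencies against a database of version ranges with known defects. The manifest below is invented; the Struts range matches the flaw exploited in the Equifax breach (CVE-2017-5638 affected 2.3.5 through 2.3.31 and 2.5 through 2.5.10).

```python
# A toy dependency check: flag components whose version falls in a
# known-vulnerable range. Real tools use curated vulnerability feeds.
KNOWN_VULNERABLE = {
    # component: list of (first_bad, first_fixed) version ranges
    "struts2-core": [((2, 3, 5), (2, 3, 32)), ((2, 5, 0), (2, 5, 10, 1))],
}

def parse(version: str) -> tuple:
    return tuple(int(p) for p in version.split("."))

def vulnerable(component: str, version: str) -> bool:
    v = parse(version)
    return any(lo <= v < hi for lo, hi in KNOWN_VULNERABLE.get(component, []))

# Hypothetical application manifest:
manifest = {"struts2-core": "2.3.30", "commons-lang3": "3.7"}
flagged = [c for c, ver in manifest.items() if vulnerable(c, ver)]
```

Tuple comparison handles the version ordering; the hard part in practice is keeping the vulnerability database complete and current.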
The funding will go to extend the company’s Nexus platform, Jackson said.
The company said it’s had an 81 percent increase in year-over-year sales in the first-half of the year, and 1.5 million users added to its flagship Nexus platform since January. In all, the company has more than 10 million software developers and 1,000 enterprises on Nexus worldwide.
Sonatype’s last round of funding was in 2016, led by Goldman Sachs, snagging $30 million.
A popular top-tier app in Apple’s Mac App Store was found pilfering browser histories from anyone who downloads it.
Yet still, at the time of writing, the rogue app — Adware Doctor — stands as the No. 1 grossing paid app in the app store’s utilities category. But Apple was warned weeks ago and did nothing to pull the app offline.
Now it seems Apple has pulled the app. A spokesperson did not return requests for comment.
Apple’s walled garden approach to Mac and iPhone security is almost entirely based on the inability to install apps outside the app store, which Apple monitors closely. While it’s not uncommon to hear of dangerous apps slipping into Google’s Play store, it’s nearly unheard of for the same to happen to Apple. Any app that doesn’t meet the company’s strict security and sometimes moral criteria will be rejected, and users won’t be able to install it.
This app promises to “keep your Mac safe” and “get rid of annoying pop-up ads” — and even “discover and remove threats on your Mac.” But what the app won’t tell you is that for just a few bucks it’ll steal and download your browser history — including all the sites you’ve searched for or accessed — to servers in China run by the app’s makers.
Thanks in part to a video posted last month on YouTube and with help from security firm Malwarebytes, it’s now clear what the app is up to.
Security researcher Patrick Wardle found that the downloaded app jumped through hoops to bypass Apple’s Mac sandboxing features, which prevent apps from grabbing data on the hard drive, and to upload a user’s browser history from the Chrome, Firefox and Safari browsers.
Wardle found that the app, thanks to Apple’s own flawed vetting, could request access to the user’s home directory and its files. That isn’t out of the ordinary, Wardle says, because tools that market themselves as anti-malware or anti-adware expect access to the user’s files to scan for problems. When a user allows that access, the app can detect and clean adware — but a malicious app can also “collect and exfiltrate any user file,” said Wardle.
Once the data is collected, it’s zipped into an archive file and sent to a domain based in China.
Wardle said that for some reason in the last few days the China-based domain went offline. At the time of writing, TechCrunch confirmed that the domain wouldn’t resolve — in other words, it was still down.
“Let’s face it, your browsing history provides a glimpse into almost every aspect of your life,” said Wardle’s post. “And people have even been convicted based largely on their internet searches!”
He said that the app’s access to such data “is clearly based on deceiving the user.”
Apple was contacted weeks ago. Its emailed response, in not so many words, said “we can’t tell you anything,” but noted that the feedback had been forwarded.
A meager $4.99 for the app may not seem like much to the average user, but it’s a heavy price to pay for having the app steal your browser history — which users will never get back. And given that Apple takes a 30 percent cut of every purchase of this popular app, there isn’t much financial incentive to withdraw it from the store.
Updated at 9:05am PT: with confirmation that the app has been pulled.
As someone who’s had a years-long front-row seat to Russia’s efforts to influence U.S. politics, former Facebook Chief Security Officer Alex Stamos has a pretty solid read on what we can expect from the 2018 midterms. Stamos left the company last month to work on cybersecurity education at Stanford.
“If there’s no foreign interference during the midterms, it’s not because we did a great job,” Stamos said in an interview with TechCrunch at Disrupt SF on Thursday. “It’s because our adversaries decided to [show] a little forbearance, which is unfortunate.”
As Stamos sees it, there is an alternative reality in which the U.S. electorate would be better off heading into its next major nationwide voting day, but critical steps haven’t been taken.
“As a society, we have not responded to the 2016 election in the way that would’ve been necessary to have a more trustworthy midterms,” he said. “There have been positive changes, but overall security of campaigns [is] not that much better, and the actual election infrastructure isn’t much better.”
Stamos believes that it’s important to remember that foreign adversaries can’t dictate the outcome of an election with any kind of guarantee. What they can do — and what he calls his “big fear” — is that they can still mess everything up in a way that calls the entire system into question.
“In most cases, throwing an election one way or another is going to be very difficult for a foreign adversary, but throwing any election into chaos is totally doable right now,” he said. “That’s where we haven’t moved forward.”
Stamos gave examples of attacks on voter registration sites that lose voter data or denial-of-service attacks on the day of elections.
“With a disinformation campaign at the same time, you can make it so that you have half the country that thinks the election was thrown,” he said.
To a foreign adversary seeking to undermine U.S. democracy, creating that kind of doubt isn’t very technically difficult. Even with no votes changed and no voting systems breached, a little doubt goes a very long way toward accomplishing the same goals as a more sophisticated hacking campaign.
Stamos cites new ad funding disclosures as one substantive change that will help make U.S. democracy healthier, but more steps need to be taken.
“Russian interference or not, we do not want a future where campaigns and candidates are cutting up the electorate into smaller and smaller pieces — so I think ad transparency is the first step there,” he said.
In some cases, those efforts will require a major shift in the way both the U.S. government and private social media companies have conducted themselves. For one, as he wrote in Lawfare, the U.S. needs “an independent, defense-only cybersecurity agency with no intelligence, military or law enforcement responsibility” rather than a patchwork of agencies each partially responsible for cybersecurity defense.
The news may not be great for 2018, but a strong dose of realism now will amplify the clarion call to do better before 2020.