In cities across Russia, large boxes in locked rooms are directly connected to the networks of some of the country’s largest phone and internet companies.
These unassuming boxes, some the size of a washing machine, house equipment that gives the Russian security services access to the calls and messages of millions of citizens. This government surveillance system remains largely shrouded in secrecy, even though phone and web companies operating in Russia are forced by law to install these large devices on their networks.
But documents seen by TechCrunch offer new insight into the scope and scale of the Russian surveillance system — known as SORM (Russian: COPM) — and how Russian authorities gain access to the calls, messages and data of customers of the country’s largest phone provider, Mobile TeleSystems (MTS).
The documents were found on an unprotected backup drive owned by an employee of Nokia Networks (formerly Nokia Siemens Networks), which through a decade-long relationship maintains and upgrades MTS’s network — and ensures its compliance with SORM.
Chris Vickery, director of cyber risk research at security firm UpGuard, found the exposed files and reported the security lapse to Nokia. In a report out Wednesday, UpGuard said Nokia secured the exposed drive four days later.
“A current employee connected a USB drive that contained old work documents to his home computer,” said Nokia spokesperson Katja Antila in a statement. “Due to a configuration mistake, his PC and the USB drive connected to it was accessible from the internet without authentication.”
“After this came to our attention, we contacted the employee and the machine was disconnected and brought to Nokia,” the spokesperson said.
Nokia said its investigation is ongoing.

‘Lawful intercept’
The exposed data — close to 2 terabytes in size — contain mostly internal Nokia files.
But a portion of the documents seen by TechCrunch reveals Nokia’s involvement in providing “lawful intercept” capabilities to phone and internet providers, which Russia mandates by law.
SORM, an acronym for “system for operative investigative activities,” was first developed in 1995 as a lawful intercept system to allow the Federal Security Services (FSB, formerly the KGB) to access telecoms data, including call logs and content of Russians. Changes to the law over the last decade saw the government’s surveillance powers expand to internet providers and web companies, which were compelled to install SORM equipment to allow the interception of web traffic and emails. Tech companies, including messaging apps like Telegram, also have to comply with the law. The state internet regulator, Roskomnadzor, has fined several companies for not installing SORM equipment.
Since the system’s expansion in recent years, several government agencies and police departments can now access citizens’ private data with SORM.
Most countries, including the U.S. and the U.K., have laws to force telecom operators to install lawful intercept equipment so security services can access phone records in compliance with local laws. That’s enabled an entirely new industry of tech companies, primarily network equipment providers like Nokia, to build and install technologies on telecom networks that facilitate lawful intercepts.
Alexander Isavnin, an expert at Roskomsvoboda and the Internet Protection Society, told TechCrunch that work related to SORM, however, is “classified” and requires engineers to obtain special certifications for work. He added that it’s not uncommon for the FSB to demand telecom and internet companies buy and use SORM equipment from a pre-approved company of its choosing.
The documents show that between 2016 and 2017, Nokia planned and proposed changes to MTS’s network as part of the telecom giant’s “modernization” effort.
Nokia planned to improve a number of local MTS-owned phone exchanges in several Russian cities — including Belgorod, Kursk and Voronezh — to comply with the latest changes to the country’s surveillance laws.
TechCrunch reviewed the documents, which included several floor plans and network diagrams for the local exchanges. The documents also show that the installed SORM device on each phone network has direct access to the data that passes through each phone exchange, including calls, messages and data.
The plans contain the physical address — including floor number — of each phone exchange, as well as the location of each locked room with SORM equipment in large bold red font, labeled “COPM.” One document was titled “COPM equipment installation [at] MTS’ mobile switching center,” a core function for handling calls on a cell network.
One photo showed the inside of one of the SORM rooms: a sealed metal cabinet holding the intercept equipment, marked “COPM” in large letters, next to an air-conditioning unit that keeps the equipment cool.
Nokia says it provides — and its engineers install — the “port” in the network to allow lawful intercept equipment to plug in and intercept data pursuant to a court order, but denied storing, analyzing or processing intercepted data.
That’s where other companies come into play. Russian lawful intercept equipment maker Malvin Systems provides SORM-compatible technology that sits on top of the “port” created by Nokia. That compatible technology allows the collection and storage of citizens’ data.
“As it is a standard requirement for lawful interception in Russia and SORM providers must be approved by the appropriate authorities, we work with other companies to enable SORM capabilities in the networks that we provide,” said Nokia’s spokesperson, who confirmed Malvin as one of those companies.
Nokia’s logo was on Malvin’s website at the time of writing. A representative for Malvin did not return a request for comment.
Another set of documents shows that the “modernized” SORM capabilities on MTS’s network also allow the government access to the telecom’s home location register (HLR) database, which contains records on each subscriber allowed to use the cell network, including their international mobile subscriber identity (IMSI) and SIM card details.
The documents also make reference to Signalling System 7 (SS7), a protocol critical to allowing cell networks to establish and route calls and text messages. The protocol has long been shown to be insecure and has been exploited by hackers.
MTS spokesperson Elena Kokhanovskaya did not respond to several emails requesting comment.

‘Bulk wiretapping’
Lawful intercept, as its name suggests, allows a government to lawfully acquire data for investigations and countering terrorism.
But as much as it’s recognized that it’s necessary and obligatory in most Western countries — including the U.S. — some have expressed concern at how Russia rolled out and interprets its lawful intercept powers.
Russia has long faced allegations of human rights abuses. In recent years, the Kremlin has cracked down on companies that don’t store citizens’ data within its borders — in some cases actively blocking Western companies like LinkedIn for failing to comply. The country also limits freedom of speech and expression, and dissidents and activists are often arrested for speaking out.
“The companies will always say that with lawful interception, they’re complying with the rule of law,” said Adrian Shahbaz, research director for technology and democracy at Freedom House, a civil liberties and rights watchdog. “But it’s clear when you look at how Russian authorities are using this type of apparatus that it goes far beyond what is normal in a democratic society.”
For Nokia’s part, it says its lawful intercept technology allows telecom companies — like MTS — to “respond to interception requests on targeted individuals received from the legal authority through functionality in our solutions.”
But critics say Russia’s surveillance program is flawed and puts citizens at risk.
Isavnin, who reviewed and translated some of the files TechCrunch has seen, said Russia’s view of lawful intercept goes far beyond other Western nations with similar laws. He described SORM as “bulk wiretapping.”
He said in the U.S., the Communications Assistance for Law Enforcement Act (CALEA) requires a company to verify the validity of a wiretap order. “In Russia, the operator installs it and have no control over what is being wiretapped,” he said. The law states that the telecom operator is “not able to determine what data is being wiretapped,” he said.
“Only the FSB knows what they collect,” he said. “There is no third-party scrutiny.”
Nokia denied wrongdoing, and said it is “committed” to supporting human rights.
Nokia chief marketing officer David French told TechCrunch in a call that Nokia uses a list of countries that are considered “high-risk” on human rights before it sells equipment that could be used for surveillance.
“When we see a match between a technology that we think has potential risk and a country that has potential risk, we have a process where we review it internally and decide to go forward with the sale,” said French.
When pressed, French declined to say whether Russia was on that list. He added that any equipment that Nokia provides to MTS is covered under non-disclosure agreements.
A spokesperson for the Russian consulate in New York could not be reached by phone prior to publication.
This latest security lapse is the second involving SORM in recent months. In August, a developer found thousands of names, numbers, addresses and geolocations said to have leaked from SORM devices. Using open-source tools, Russian developer Leonid Evdokimov found dozens of “suspicious packet sniffers” in the networks of several Russian internet providers.
It took more than a year for the internet providers to patch those systems.
Ingrid Lunden contributed translations and reporting.
Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.
How do you keep your startup secure?
That’s the big question we explored at TC Sessions: Enterprise earlier this month. No matter its size, every startup is an enterprise, and every startup grows as it builds out. But that rapid growth can distract a company from the foundational principle of any modern business: keeping it secure.
Security isn’t just a buzzword. As some of the largest companies in Silicon Valley have shown, security can be difficult. From storing passwords in plaintext to data breaches galore, how can startups learn from some of the biggest security lapses in the tech industry’s history?
Our panel consisted of three of the brightest minds in enterprise security: Wendy Nather, head of advisory CISOs at Duo Security, is an enterprise security expert; Martin Casado, general partner at Andreessen Horowitz, is a security and enterprise startup investor; and Emily Heath, United’s chief information security officer, oversees the security operations of one of the largest U.S. airlines.
Here is the advice they had.

Security from the very start
As more enterprise developers make use of open source, it becomes increasingly important for companies to make sure that they are complying with licensing requirements. They also need to ensure the open source components are being updated over time for security purposes. That’s where FOSSA comes in, and today the company announced an $8.5 million Series A.
The round was led by Bain Capital Ventures with help from Costanoa Ventures and Norwest Venture Partners. Today’s round brings the total raised to $11 million, according to the company.
Company founder and CEO Kevin Wang says that over the last 18 months, the startup has concentrated on building tools to help enterprises comply with their growing use of open source in a safe and legal way. He says that, overall, this increasing use of open source is great news for developers, and for these bigger companies in general. While it enables them to take advantage of all the innovation going on in the open source community, they need to make sure they are in compliance.
“The enterprise is really early on this journey, and that’s where we come in. We provide a platform to help the enterprise manage open source usage at scale,” Wang explained. That involves three main pieces. First it tracks all of the open source and third-party code being used inside a company. Next, it enforces licensing and security policy, and finally, it has a reporting component. “We automate the mass reporting and compliance for all of the housekeeping that comes from using open source at scale,” he said.
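The first two of the three pieces Wang describes — tracking third-party code and enforcing a license policy — can be sketched in miniature with Python’s standard library. The allow-list and function names below are illustrative assumptions, not a reflection of how FOSSA’s platform actually works.

```python
# A minimal sketch of license tracking and policy enforcement for installed
# Python packages. Illustrative only; FOSSA's platform covers many languages
# and far more complex code bases.
from importlib.metadata import distributions

# A hypothetical policy allow-list; anything else gets flagged for review.
ALLOWED = {"MIT", "BSD", "Apache-2.0", "Apache Software License"}


def license_inventory():
    """Map each installed distribution to its declared license string."""
    inventory = {}
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        inventory[name] = dist.metadata.get("License", "UNKNOWN")
    return inventory


def flag_violations(inventory, allowed=ALLOWED):
    """Return packages whose declared license is not on the allow-list."""
    return {pkg: lic for pkg, lic in inventory.items() if lic not in allowed}
```

The third piece, reporting, would then be a matter of serializing the flagged set for compliance teams.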
The enterprise focus is relatively new for the company. It originally launched in 2017 as a tool for developers to track individual use of open source inside their programs. Wang saw a huge opportunity to apply this same kind of capability inside larger organizations, which were hungry for tools to help them comply with the myriad open source licenses out there.
“We found that there was no tooling out there that can manage the scale and breadth across all the different enterprise use cases and all the really complex mission-critical code bases,” he said. What’s more, he found that where there were existing tools, they were vastly underutilized or didn’t provide broad enough coverage.
The company announced a $2.2 million seed round in 2017, and since then has grown from 10 to 40 employees. With today’s funding, that should increase as the company is expanding quickly. Wang reports that the startup has been tripling its revenue numbers and customer accounts year over year. The new money should help accelerate that growth and expand the product and markets it can sell into.
Anti-fraud startup Shape Security has tipped over the $1 billion valuation mark following its latest Series F round of $51 million.
The Mountain View, Calif.-based company announced the fundraise Thursday, bringing the total amount of outside investment to $173 million since the company debuted in 2011.
C5 Capital led the round along with several other new and returning investors, including Kleiner Perkins, HPE Growth, and Norwest Venture Partners.
Shape Security protects companies against automated and imitation attacks, which often employ bots to break into networks using stolen or reused credentials. Shape uses artificial intelligence to discern bots from ordinary users by comparing known information such as a user’s location, and collected data like mouse movements to shut down attempted automated logins in real-time.
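The general idea behind that kind of behavioral detection is easy to sketch. The toy heuristic below flags logins with no pointer activity or suspiciously uniform movement, and treats a location mismatch as weaker, corroborating evidence. Every name and threshold here is an invented illustration; Shape’s production models are proprietary and far more sophisticated.

```python
# A toy behavioral bot-detection heuristic. Human pointer movement tends to
# be irregular; scripted logins often show no movement, or perfectly uniform
# movement. Illustrative only -- not Shape's actual system.
from statistics import pstdev


def looks_automated(mouse_deltas, known_country, seen_country,
                    var_threshold=1.0):
    """Return True if a login attempt is probably automated.

    mouse_deltas: per-event pointer movement distances (pixels).
    known_country / seen_country: hypothetical location signals.
    """
    if not mouse_deltas:
        # No pointer activity at all is a strong automation signal.
        return True
    variance = pstdev(mouse_deltas)
    if variance < var_threshold:
        # Near-zero variance means perfectly uniform movement: a script.
        return True
    if known_country != seen_country and variance < var_threshold * 5:
        # A location mismatch alone is weak evidence; flag it only when
        # combined with low movement variance.
        return True
    return False
```

A production system would combine dozens of such signals and learn the thresholds from data rather than hard-coding them.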
The company said it now protects against two billion fraudulent logins daily.
C5 managing partner André Pienaar said he believes Shape will become the “definitive” anti-fraud platform for the world’s largest companies.
“While we expect a strong financial return, we also believe that we can bring Shape’s platform into many of the leading companies in Europe who look to us for strategic ideas that benefit the entire value-chain where B2C applications are used,” Pienaar told TechCrunch.
Shape’s chief executive Derek Smith said the $51 million injection will go towards the company’s international expansion and product development — particularly the capabilities of its AI system.
He added that Shape was preparing for an IPO.
Web feature developers are being warned to step up attention to privacy and security as they design contributions.
Writing in a blog post about “evolving threats” to Internet users’ privacy and security, the W3C standards body’s technical architecture group (TAG) and Privacy Interest Group (PING) set out a series of revisions to the W3C’s Security and Privacy Questionnaire for web feature developers.
The questionnaire itself is not new. But the latest updates place greater emphasis on the need for contributors to assess and mitigate privacy impacts, with developers warned that “features may not be implemented if risks are found impossible or unsatisfactorily mitigated”.
In the blog post, independent researcher Lukasz Olejnik, currently serving as an invited expert at the W3C TAG, and Apple’s Jason Novak, representing the PING, write that the intent of the update is to make it “clear that feature developers should consider security and privacy early in the feature’s lifecycle” [emphasis theirs].
“The TAG will be carefully considering the security and privacy of a feature in their design reviews,” they further warn, adding: “A security and privacy considerations section of a specification is more than answers to the questionnaire.”
Security & privacy to be considered early in the web/browser feature’s lifecycle. New high level type of threat "legitimate misuse": just because something is technically possible does not mean it was designed for abuse and it is OK to do so
— Lukasz Olejnik (@lukOlejnik) September 11, 2019
The revisions to the questionnaire include updates to the threat model and specific threats a specification author should consider — including a new high level type of threat dubbed “legitimate misuse“, where the document stipulates that: “When designing a specification with security and privacy in mind, both use and misuse cases should be in scope.”
“Including this threat into the Security and Privacy Questionnaire is meant to highlight that just because a feature is possible does not mean that the feature should necessarily be developed, particularly if the benefitting audience is outnumbered by the adversely impacted audience, especially in the long term,” they write. “As a result, one mitigation for the privacy impact of a feature is for a user agent to drop the feature (or not implement it).”
“Features should be secure and private by default and issues mitigated in their design,” they further emphasize. “User agents should not be afraid of undermining their users’ privacy by implementing new web standards or need to resort to breaking specifications in implementation to preserve user privacy.”
The pair also urge specification authors to avoid blanket treatment of first and third parties, suggesting: “Specification authors may want to consider first and third parties separately in their feature to protect user security and privacy.”
The revisions to the questionnaire come at a time when browser makers are dialling up their response to privacy threats — encouraged by rising public awareness of the risks posed by data leaks, as well as increased regulatory action on data protection.
Last month the open source WebKit browser engine (which underpins Apple’s Safari browser) announced a new tracking prevention policy that takes the strictest line yet on background and cross-site tracking, saying it would treat attempts to circumvent the policy as akin to hacking — essentially putting privacy protection on a par with security.
Earlier this month Mozilla also pushed out an update to its Firefox browser that enables an anti-tracking cookie feature across the board, for existing users too — demoting third party cookies to default junk.
Even Google’s Chrome browser has made some tentative steps towards enhancing privacy — announcing changes to how it handles cookies earlier this year. Though the adtech giant has studiously avoided flipping on privacy by default in Chrome where third party tracking cookies are concerned, leading to accusations that the move is mostly privacy-washing.
More recently Google announced a long term plan to involve its Chromium browser engine in developing a new open standard for privacy — sparking concerns it’s trying to both kick the can on privacy protection and muddy the waters by shaping and pushing self-interested definitions which align with its core data-mining business interests.
There’s more activity to consider too. Earlier this year another data-mining adtech giant, Facebook, made its first major API contribution to Google’s Chrome browser — which it also brought to the W3C Performance Working Group.
Facebook does not have its own browser, of course. Which means that authoring contributions to web technologies offers the company an alternative conduit to try to influence Internet architecture in its favor.
The W3C TAG’s latest move to focus minds on privacy and security by default is timely.
It chimes with a wider industry shift toward proactively defending user data, and should rule out any rubberstamping of tech giants’ contributions to Internet architecture, which is obviously a good thing. Scrutiny remains the best defence against self-interest.
Last week, users around the world found Wikipedia down after the online, crowdsourced encyclopedia became the target of a massive, sustained DDoS attack — one that it is still actively fighting several days later (even though the site is now back up). Now, in a coincidental twist of timing, Wikipedia’s parent, the Wikimedia Foundation, is announcing a donation aimed at helping the group better cope with situations just like this: Craig Newmark Philanthropies, a charity funded by the Craigslist founder, is giving $2.5 million to Wikimedia to help it improve its security.
The gift had been in the works before the attack last week, and it underscores a persistent paradox. The non-profit runs what is considered one of the 10 most popular sites on the web, accessed from some 1 billion different devices each month and clocking upwards of 18 billion visits in that period (the latter figure is from 2016, so likely higher now). Wikipedia is used as a reference point by millions every day to get the facts on everything from Apple to Zynga, mushrooms and Myanmar history, and as a wiki, it was built from the start for interactivity.
But in this day and age, when anything is fair game for malicious hackers, it’s an easy target, sitting out in the open and generally lacking the kinds of funds that private companies and other for-profit entities have to protect themselves from security breaches. Alongside the networks of volunteers who put in free time to contribute security work to Wikimedia, the organization had only two people on its security staff two years ago, one of them part-time.
That has been getting fixed, very gradually, by John Bennett, the Wikimedia Foundation’s Director of Security, who joined the organization in January 2018. He told TechCrunch in an interview that he has been working on a more centralized and coherent system: bringing on more staff to help build tools to combat nefarious activity on the site and on Wikimedia’s systems, and, crucially, putting policies in place to help prevent breaches in the future.
“We’ve lived in this bubble of ‘no one is out to get us,'” he said of the general goodwill that surrounds not-for-profit, public organizations like the Wikimedia Foundation. “But we’re definitely seeing that change. We have skilled and determined attackers wishing to do harm to us. So we’re very grateful for this gift to bolster our efforts.
“We weren’t a sitting duck before the breach last week, with a lot of security capabilities built up. But this gift will help improve our posture and build upon what we started and have been building these last two years.”
The security team collaborates with other parts of the organization to handle some of the more pointed issues. He notes that Wikimedia uses a lot of machine learning that has been developed to monitor pages for vandalism, and an anti-harassment team also works alongside them. (Newmark’s contribution today, in fact, is not the first donation he’s made to the organization. In the past he has donated around $2 million towards various projects including the Community Health Initiative, the anti-harassment program; and the more general Wikimedia Endowment).
The DDoS attack is currently being handled by the site reliability engineering team, which is still engaged and monitoring the situation; Bennett declined to comment further on that.
You can support Wikipedia and Wikimedia, too.
Mobile messaging app Telegram has fixed a bug allowing users to recover photos and videos ‘unsent’ by other people.
Telegram, which has more than 100 million users, has an ephemeral messaging feature that allows users to “unsend” sent messages from other people’s inboxes, such as when a message is sent by mistake.
But security researcher Dhiraj Mishra, who found the privacy issue and shared his findings exclusively with TechCrunch, said that although Telegram was removing the messages from a user’s device, any sent photos or videos would still be stored on the recipient’s phone.
The researcher found that other messaging apps, like WhatsApp, had the same ephemeral “unsend” feature but, when tested, deleted both the message and its media.
Mishra said the Android version of Telegram would permanently store photos and videos in the device’s internal storage.
“This works perfectly in groups as well,” he told TechCrunch. “If you have a Telegram group of 100,000 members and you send a media message by mistake and you delete it, it only gets deleted from the chat but will remain in media storage of all 100,000 members,” he said.
It’s not known if Telegram users have been affected by the privacy issue. But recently we reported several cases of visa holders who have been denied entry to the U.S. for content on their phones sent by other people.
After TechCrunch reached out, Telegram fixed the vulnerability. Mishra received a €2,500 bug bounty for discovering and disclosing it.
A spokesperson for Telegram confirmed the bug fix had rolled out on September 5.
Apple has issued a tart response to an extensive report by Google of a serious security flaw in iOS. The flaw, which let an attacker gain root access to a device visiting a malicious website, was reported last week. Apple wants to “make sure all of our customers have the facts,” which is funny, because it’s likely we wouldn’t have any of the facts if Google had not so rigorously documented this issue.
In a brief news post, Apple says that it has heard concerns from its customers and wants to make sure they know they are not at risk.
The attack, Apple says, was “narrowly focused” and not an exploit “en masse.” “The attack affected fewer than a dozen websites that focus on content related to the Uighur community,” Apple wrote.
While it’s true that only a small number of websites were affected, Google said that those websites were visited thousands of times per week — and the attacks were active for about two months. Even a conservative estimate based on these numbers suggests more than a hundred thousand devices could easily have been probed and, if vulnerable, infected. If only 1 in 100 were iPhones, that would be root access to a thousand of the target population. That rock-bottom estimate already sounds pretty “en masse” to me.
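That back-of-the-envelope estimate is easy to reproduce. Each input below is one plausible reading of the article’s hedged figures, not a measured number:

```python
# Reproducing the conservative estimate above. All inputs are assumptions
# taken from the article's hedged wording, not measured data.
sites = 12                 # "fewer than a dozen websites", rounded up
visits_per_week = 1_000    # low end of "thousands of times per week" per site
weeks_active = 9           # "about two months"
iphone_share = 1 / 100     # "if only 1 in 100 were iPhones"

total_visits = sites * visits_per_week * weeks_active
compromised_iphones = int(total_visits * iphone_share)

print(total_visits)        # 108000: "more than a hundred thousand"
print(compromised_iphones) # 1080: "root access to a thousand"
```

Even with these deliberately low inputs, the exposure is far from the handful of devices Apple’s framing implies.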
Furthermore, while it may make the non-Uighurs among us feel better that we were not the targets of this campaign, it’s cold comfort as the targeted demographic could just as easily have been a political or religious institution we do take part in.
Apple takes issue with Google’s suggestion that this offered “the capability to target and monitor the private activities of entire populations in real time.” This was, according to Apple, “stoking fear among all iPhone users that their devices had been compromised.”
Yet Google’s warning in this case seems relevant. An undetectable root exploit for current iPhones deployed via a website popular among a targeted population? That should stoke fear among all iPhone users, as it seems clear that they very well could have been compromised before now. After all, there’s no evidence this Uighur-targeted attack was the only one.
Apple points out that “when Google approached us, we were already in the process of fixing the exploited bugs.” That’s great. But who then wrote up a long technical discussion of the issue so that other security researchers, along with consumers, will be aware?
It’s a bit troubling for Apple to say that “iOS security is unmatched” during the discussion of an incredibly dangerous and powerful exploit that was apparently deployed successfully against an ethnic minority by, almost certainly, the only nation-state that has any interest in doing so. Has Apple explained to the Uighurs whose phones were invisibly and completely taken over by malicious software that it’s okay because “security is a never-ending journey”?
Had Google’s Project Zero researchers not documented this problem, we probably would never have heard about it except as an anonymous “security fixes” decimal point in our mobile operating systems.
Journey or no journey, this was a serious security failure that appears to have been successfully and maliciously exploited in the wild. Apple’s sour grapes and defensive language are out of place here; a mea culpa would have served the company better.
An exposed web server storing résumés of job seekers — including from recruitment site Monster — has been found online.
The server contained résumés and CVs of job applicants spanning 2014 to 2017, many of which included private information such as phone numbers and home addresses, as well as email addresses and each person’s prior work experience.
Of the documents we reviewed, most users were located in the United States.
It’s not known exactly how many files were exposed, but thousands of résumés were found in a single folder dated May 2017. Other files found on the exposed server included immigration documentation for work, which Monster does not collect.
A company statement attributed to Monster’s chief privacy officer Michael Jones said the server was owned by an unnamed recruitment customer, with which it no longer works. When pressed, the company declined to name the recruitment customer.
“The Monster Security Team was made aware of a possible exposure and notified the recruitment company of the issue,” the company said, adding the exposed server was secured shortly after it was reported in August.
Although the data is no longer accessible directly from the exposed web server, hundreds of résumés and other documents can be found in results cached by search engines.
But Monster did not warn users of the exposure, and only admitted user data was exposed after a security researcher alerted TechCrunch to the matter.
“Customers that purchase access to Monster’s data — candidate résumés and CVs — become the owners of the data and are responsible for maintaining its security,” the company said. “Because customers are the owners of this data, they are solely responsible for notifications to affected parties in the event of a breach of a customer’s database.”
Under local data breach notification laws, companies are obliged to inform state attorneys general where large numbers of users in their states are affected. Although Monster is not duty bound to disclose the exposure to regulators, some companies proactively warn their users even when third-parties are involved.
It’s not uncommon for companies to warn their users of a third-party breach. Earlier this year after hackers siphoned off millions of credit cards from the American Medical Collection Agency, a third-party payments processor, its customers — LabCorp and Quest Diagnostics — admitted to the security lapse.
Monster said that because the exposure happened on a customer system, Monster is “not in a position” to identify or confirm affected users.
“The DPC became aware of this issue through the recent media coverage and we immediately made contact with Facebook and we have asked them a series of questions. We are awaiting Facebook’s responses to those questions,” a spokeswoman for the Irish Data Protection Commission told us.
We’ve reached out to Facebook for a response.
As we reported earlier, a security researcher discovered an unsecured database of hundreds of millions of phone numbers linked to Facebook accounts.
The exposed server contained more than 419 million records across several databases on Facebook users from multiple countries, including 18 million records on users in the U.K.
We were able to verify a number of records in the database — including UK Facebook users’ data.
The presence of Europeans’ data in the scraped stash makes the breach a clear matter of interest to the region’s data watchdogs.
Europe’s General Data Protection Regulation (GDPR) imposes stiff penalties for compliance failures such as security breaches — with fines that can scale as high as 4% of a company’s annual turnover.
Ireland’s DPC is Facebook’s lead data protection regulator in Europe under GDPR’s one-stop shop mechanism — meaning it leads on cross-border actions, though other concerned DPAs can contribute to cases and may also chip in views on any formal outcomes that result.
The UK’s data protection watchdog, the ICO, told us it is aware of the Facebook security incident.
“We are in contact with the Irish Data Protection Commission (DPC), as they are the lead supervisory authority for Facebook Ireland Limited. The ICO will continue to liaise with the IDPC to establish the details of the incident and to determine if UK residents have been affected,” a spokeswoman said.
It’s not yet clear whether the Irish DPC will open a formal investigation into the breach of Facebook users’ phone numbers.
It already has a large number of open investigations into Facebook and Facebook-owned businesses on its desk since GDPR’s one-stop shop mechanism came into force — including one into a major token security breach last year.
In the case of the latest security incident, it’s also not clear exactly when Facebook users’ phone numbers were scraped from the platform. In a response yesterday, the company said the dataset is “old”, adding that it “appears to have information obtained before we made changes last year to remove people’s ability to find others using their phone numbers”.
If that’s correct, the phone number breach is likely to pre-date April 2018 — which was when Facebook announced it was making changes to its account search and recovery feature, after finding it had been abused by what it dubbed “malicious actors”.
“Given the scale and sophistication of the activity we’ve seen, we believe most people on Facebook could have had their public profile scraped in this way,” Facebook said at the time.
It would also therefore pre-date GDPR coming into force, in May 2018, so would likely fall under earlier EU data protection laws — which carry less stringent penalties.
Representatives from the Federal Bureau of Investigation, the Office of the Director of National Intelligence and the Department of Homeland Security met with counterparts at tech companies, including Facebook, Google, Microsoft and Twitter, to discuss election security, Facebook confirmed.
“The purpose was to build on previous discussions and further strengthen strategic collaboration regarding the security of the 2020 U.S. state, federal, and presidential elections,” according to a statement from Facebook head of cybersecurity policy, Nathaniel Gleicher.
First reported by Bloomberg, the meeting between America’s largest technology companies and the trio of government security agencies responsible for election security is a sign of how seriously the government and the country’s largest technology companies are treating the threat of foreign intervention into elections.
Earlier this year the Office of the Inspector General issued a report saying that the Department of Homeland Security has not done enough to safeguard elections in the United States.
Throughout the year, reports of persistent media manipulation and the dissemination of propaganda on social media platforms have cropped up not just in the United States but around the world.
In April, Facebook removed a number of accounts ahead of the Spanish election for their role in spreading misinformation about the campaign.
Companies have responded to the threat by improving mechanisms for users to call out fake accounts and by upgrading in-house technologies used to combat the spread of misinformation.
“Improving election security and countering information operations are complex challenges that no organization can solve alone,” said Gleicher in a statement. “Today’s meeting builds on our continuing commitment to work with industry and government partners, as well as with civil society and security experts, to better understand emerging threats and prepare for future elections.”
The company had raised $23.5 million, according to Crunchbase data. The three co-founders, Xu Zou, May Wang and Jianlin Zeng, will be joining Palo Alto after the sale is official.
With Zingbox, the company gets IoT security chops, something that is increasingly important as companies deploy internet-connected smart devices and sensors. While these tools can greatly benefit customers, they also often carry a huge security risk.
Zingbox, which was founded in 2014, gives Palo Alto a modern cloud-based solution built on a subscription model along with engineering talent to help build out the solution further. Nikesh Arora, chairman and CEO of Palo Alto Networks, certainly sees this.
“The proliferation of IoT devices in enterprises has left customers facing an enormous gap in protection against cybersecurity attacks. With the proposed acquisition of Zingbox, we will provide a first-of-its-kind subscription for our Next-Generation Firewall and Cortex platforms that gives customers the ability to gain control, visibility and security of their connected devices at scale,” Arora said in a statement.
This is the fourth security startup the company has purchased this year. It acquired two companies, nabbing PureSec and Twistlock, on the same day last spring. Earlier this year, it bought Demisto for $560 million. All of these acquisitions are meant to build up the company’s portfolio of modern security offerings without having to build these kinds of tools in-house from scratch.
Hundreds of millions of phone numbers linked to Facebook accounts have been found online.
The exposed server contained more than 419 million records across several databases on users across geographies, including 133 million records on U.S.-based Facebook users, 18 million records on users in the U.K., and more than 50 million records on users in Vietnam.
But because the server wasn’t protected with a password, anyone could find and access the database.
Each record contained a user’s unique Facebook ID and the phone number listed on the account. A user’s Facebook ID is typically a long, unique and public number associated with their account, which can be easily used to discern an account’s username.
But phone numbers have not been publicly accessible for more than a year, since Facebook restricted access to users’ phone numbers.
TechCrunch verified a number of records in the database by matching a known Facebook user’s phone number against their listed Facebook ID. We also checked other records by matching phone numbers against Facebook’s own password reset feature, which can be used to partially reveal a user’s phone number linked to their account.
Some of the records also had the user’s name, gender, and location by country.
This is the latest security lapse involving Facebook data after a string of incidents since the Cambridge Analytica scandal, which saw more than 80 million profiles scraped to help identify swing voters in the 2016 U.S. presidential election.
Since then the company has seen several high-profile scraping incidents, including at Instagram, which recently admitted to having profile data scraped in bulk.
This latest incident exposed millions of users’ phone numbers from just their Facebook IDs, putting them at risk of spam calls and SIM-swapping attacks, which rely on tricking cell carriers into giving a person’s phone number to an attacker. With someone else’s phone number, an attacker can force-reset the password on any internet account associated with that number.
Sanyam Jain, a security researcher and member of the GDI Foundation, found the database and contacted TechCrunch after he was unable to find the owner. After a review of the data, neither could we. But after we contacted the web host, the database was pulled offline.
Jain said he found profiles with phone numbers associated with several celebrities.
Facebook spokesperson Jay Nancarrow said the data had been scraped before Facebook cut off access to user phone numbers.
“This dataset is old and appears to have information obtained before we made changes last year to remove people’s ability to find others using their phone numbers,” the spokesperson said. “The dataset has been taken down and we have seen no evidence that Facebook accounts were compromised.”
But questions remain as to exactly who scraped the data, when it was scraped from Facebook, and why.
Facebook has long restricted developers’ access to user phone numbers. The company also made it more difficult to search for friends’ phone numbers. But the data appeared to be loaded into the exposed database at the end of last month — though that doesn’t necessarily mean the data is new.
This latest exposure is the most recent example of data being stored online without a password. Although such lapses are often tied to human error rather than a malicious breach, they nevertheless represent an emerging security problem.
Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755–8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.
It has been one week since U.S. border officials denied entry to a 17-year-old Harvard freshman just days before classes were set to begin.
Ismail Ajjawi, a Palestinian student living in Lebanon, had his student visa canceled and was put on a flight home shortly after arriving at Boston Logan International Airport. Customs & Border Protection officers searched his phone and decided he was ineligible for entry because of his friends’ social media posts. Ajjawi told the officers he “should not be held responsible” for others’ posts, but it was not enough for him to clear the border.
The news prompted outcry and fury. But TechCrunch has learned it was not an isolated case.
Since our story broke, we came across another case of a U.S. visa holder who was denied entry to the country on the grounds that he had been sent a graphic WhatsApp message. Dakhil — whose name we have changed to protect his identity — was detained for hours and subsequently had his visa canceled. He was sent back to Pakistan and banned from entering the U.S. for five years.
Since 2015, the number of device searches has increased four-fold to over 30,200 each year. Lawmakers have accused the CBP of conducting itself unlawfully by searching devices without a warrant, but CBP says it does not need to obtain a warrant for device searches at the border. Several courts have tried to tackle the question of whether or not device searches are constitutional.
Abed Ayoub, legal and policy director at the American-Arab Anti-Discrimination Committee, told TechCrunch that device searches and subsequent denials of entry had become the “new normal.”
This is Dakhil’s story.
* * *
As a Pakistani national, Dakhil needed a visa to enter the U.S. He obtained a B1/B2 visa, which allowed him to temporarily enter the U.S. for work and to visit family. Months later, he arrived at George Bush Intercontinental Airport in Houston, Texas, tired but excited to see his cousin for the first time in years.
It didn’t take long before Dakhil realized something wasn’t right.
Dakhil, who had never traveled to the U.S. before, was waiting in the immigration line at the border when a CBP officer approached him to ask why he had traveled to the U.S. He said it was for a vacation to visit his family. The officer took his passport and, after a brief examination of its stamps, asked why Dakhil had visited Saudi Arabia. It was for Hajj and Umrah, he said. As a Muslim, he is obliged to make the pilgrimages to Mecca at least once in his lifetime. The officer handed back his passport and Dakhil continued to wait in line.
At his turn, Dakhil approached the CBP officer in his booth, who asked much the same questions. But, unsatisfied with his responses, the officer took Dakhil to a small room close to but separate from the main immigration hall.
“He asked me everything,” Dakhil told TechCrunch. The officer asked about his work, his travel history and how long he planned to stay in the U.S. He told the officer he planned to stay for three months with a plan to travel to Disney World in Florida and later New York City with his wife and newborn daughter, who were still waiting for visas.
The officer then rummaged through Dakhil’s carry-on luggage, pulling out his computer and other items. Then the officer took Dakhil’s phone, which he was told to unlock, and took it to another room.
For more than six hours, Dakhil was forced to sit in a bright, cold and windowless airport waiting room. There was nowhere to lie down. Others had pushed chairs together to try to sleep.
Dakhil said when the officer returned, the questioning continued. The officer demanded to know more about what he was planning to do in the U.S. One line of questioning focused on an officer’s accusation that Dakhil was planning to work at a gas station owned by his cousin — which Dakhil denied.
“I told him I had no intention to work,” he told TechCrunch. The officer continued with his line of questioning, he said, but he continued to deny that he wanted to stay or work in the U.S. “I’m quite happy back in Karachi and doing good financially,” he said.
Two more officers entered the room and began to interrogate him as the first officer continued to search his bags. At one point the officer pulled out a gift for his cousin — a painting with Arabic inscriptions.
But Dakhil was convinced he would be allowed entry — the officers had found nothing derogatory, he said.
“Then the officer who took my phone showed me an image,” he told TechCrunch. It was an image from 2009 of a child, who had been murdered and mutilated. Despite the graphic nature of the image, TechCrunch confirmed the photo was widely distributed on the internet and easily searchable using the name of the child’s murderer.
“I was shocked. What should I say?” he told TechCrunch, describing the panic he felt. “This image is disturbing, but you can’t control the forwarded messages,” he explained.
Dakhil told the officer that the image was sent to him in a WhatsApp group. It’s difficult to distinguish where a saved image came from on WhatsApp, because it automatically downloads received images and videos to a user’s phone. Questionable content — even from unsolicited messages — found during a border search could be enough to deny the traveler entry.
The image was used to warn parents about kidnappings and abductions of children in his native Karachi. He described it as one of those viral messages that you forward to your friends and family to warn parents about the dangers to their children. The officer pressed for details about who sent the message. Dakhil told the officer that the sender was someone he met on his Hajj pilgrimage in 2011.
“We hardly knew each other,” he said, saying they stayed in touch through WhatsApp but barely spoke.
Dakhil told the officer that the image could be easily found on the internet, but the officer was more interested in the names of the WhatsApp group members.
“You can search the image over the internet,” Dakhil told the officer. But the officer declined and said the images were his responsibility. “We found this on your cellphone,” the officer said. At one point the officer demanded to know if Dakhil was organ smuggling.
After 15 hours of questioning and waiting, the officers decided that Dakhil would be denied entry and his five-year visa would be cancelled. He was told his family’s visas would also be cancelled. The officers asked Dakhil if he wanted to claim asylum, which he declined.
“I was treated like a criminal,” Dakhil said. “They made my life miserable.”
* * *
It’s been almost nine months since Dakhil was turned away at the U.S. border.
He went back to the U.S. Embassy in Karachi twice to try to seek answers, but embassy officials said they could not reverse a CBP decision to deny a traveler entry to the United States. Frustrated but determined to know more, Dakhil asked for his records through a Freedom of Information Act (FOIA) request — which anyone can do — but had to pay hundreds of dollars for its processing.
He provided TechCrunch with the documents he obtained. One record said that Dakhil was singled out because his name matched a “rule hit,” such as a name on a watchlist or a visit to a country under sanctions or embargoes, which typically requires additional vetting before the traveler can be allowed into the U.S.
The record did not say what flagged Dakhil for additional screening, and his travel history did not include an embargoed country.
One document said CBP denied Dakhil entry to the U.S. “due to the derogatory images found on his cellphone,” and his alleged “intent to engage in unauthorized employment during his entry.” But Dakhil told TechCrunch that he vehemently denies the CBP’s allegations that he was traveling to the U.S. to work.
He said the document portrays a different version of events than what he experienced.
“They totally changed this scenario,” he said, rebutting several remarks and descriptions reported by the officers. “They only disclosed what they wanted to disclose,” he said. “They want to justify their decision, so they mentioned working in a gas station by themselves,” he claimed.
The document also said Dakhil “was permitted to view the WhatsApp group message thread on his phone and he stated that it was sent to him in September 2018,” but this was not enough to satisfy the CBP officers who ruled he should be denied entry. The document said Dakhil stated that he “never took this photo and doesn’t believe [the sender is] involved either,” but he was “advised that he was responsible for all the contents on his phone to include all media and he stated that he understood.”
The same document confirmed the contents of his phone were uploaded to CBP’s central database and provided to the FBI’s Joint Terrorism Task Force.
Dakhil was “found inadmissible” and was put on the next flight back to Karachi, more than a day after he was first approached by the CBP officer in the immigration line.
A spokesperson for Customs & Border Protection declined to comment on individual cases, but provided a boilerplate statement.
“CBP is responsible for ensuring the safety and admissibility of the goods and people entering the United States. Applicants must demonstrate they are admissible into the U.S. by overcoming all grounds of inadmissibility including health-related grounds, criminality, security reasons, public charge, labor certification, illegal entrants and immigration violations, documentation requirements, and miscellaneous grounds,” the spokesperson said. “This individual was deemed inadmissible to the United States based on information discovered during the CBP inspection.”
CBP said it also has the right to cancel visas if a traveler is deemed inadmissible to the United States.
It’s unlikely Dakhil will return to the U.S., but he said he had hope for the Harvard student who suffered a similar fate.
“Let’s hope he can fight and make it,” he said.
There’s not a week that goes by where cybersecurity doesn’t dominate the headlines. This week was no different. Struggling to keep up? We’ve collected some of the biggest cybersecurity stories from the week to keep you in the know and up to speed.
Malicious websites were used to secretly hack into iPhones for years, says Google
TechCrunch: This was the biggest iPhone security story of the year. Google researchers found a number of websites that were stealthily hacking into thousands of iPhones every week. The operation was carried out by China to target Uyghur Muslims, according to sources, and also targeted Android and Windows users. Google said it was an “indiscriminate” attack through the use of previously undisclosed so-called “zero-day” vulnerabilities.
Wired: For the second time in two years, researchers found a serious flaw in the key fobs used to unlock Tesla’s Model S cars. After the first crack, the encryption key was doubled in size; using twice the resources, the researchers cracked the key again. The good news is that a software update can fix the issue.
Microsoft’s lead EU data watchdog is looking into fresh Windows 10 privacy concerns
TechCrunch: Microsoft could be back in hot water with the Europeans after the Dutch data protection authority asked its Irish counterpart, which oversees the software giant, to investigate Windows 10 for allegedly breaking EU data protection rules. A chief complaint is that Windows 10 collects too much telemetry from its users. Microsoft made some changes after the issue was first raised in 2017, but the Irish regulator is looking at whether these changes go far enough — and whether users are adequately informed. Microsoft could be fined up to 4% of its global annual revenue if found to have flouted the law. Based on 2018’s figures, that could mean fines as high as $4.4 billion.
The New York Times: A secret cyberattack against Iran in June, reported only this week, significantly degraded Tehran’s ability to track and target oil tankers in the region. It’s one of several offensive operations against foreign targets carried out by the U.S. government in recent months. Iran’s military seized a British tanker in July in retaliation for a U.S. operation that downed an Iranian drone. According to a senior official, the strike “diminished Iran’s ability to conduct covert attacks” against tankers, but sparked concern that Iran may be able to quickly get back on its feet by fixing the vulnerability the Americans used to shut down Iran’s operation in the first place.
Apple is turning Siri audio clip review off by default and bringing it in house
TechCrunch: After Apple was caught paying contractors to review Siri queries without user permission, the technology giant said this week it will turn off human review of Siri audio by default and bring any opt-in review in-house. That means users will actively have to allow Apple staff to “grade” audio snippets made through Siri. Apple began audio grading to improve the Siri voice assistant. Amazon, Facebook, Google, and Microsoft have all been caught out using contractors to review user-generated audio.
Ars Technica: Hackers are targeting and exploiting vulnerabilities in two popular corporate virtual private network (VPN) services. FortiGate and Pulse Secure let remote employees tunnel into their corporate networks from outside the firewall. But these VPN services contain flaws which, if exploited, could let a skilled attacker tunnel into a corporate network without needing an employee’s username or password. That means an attacker could get access to all of the internal resources on that network — potentially leading to a major data breach. News of the attacks came a month after the vulnerabilities in these widely used corporate VPNs were first revealed. Thousands of vulnerable endpoints remain exposed — months after the bugs were fixed.
Grand jury indicts alleged Capital One hacker over cryptojacking claims
TechCrunch: And finally, just when you thought the Capital One breach couldn’t get any worse, it does. A federal grand jury said the accused hacker, Paige Thompson, should be indicted on new charges. The alleged hacker is said to have created a tool to detect cloud instances hosted by Amazon Web Services with misconfigured web firewalls. Using that tool, she is accused of breaking into those cloud instances and installing cryptocurrency mining software. This is known as “cryptojacking,” and relies on using computer resources to mine cryptocurrency.
In a rare feat, French police have hijacked and neutralized a massive cryptocurrency mining botnet controlling close to a million infected computers.
The notorious Retadup malware infects computers and starts mining cryptocurrency by sapping power from a computer’s processor. Although the malware was used to generate money, the malware operators easily could have run other malicious code, like spyware or ransomware. The malware also has wormable properties, allowing it to spread from computer to computer.
Since its first appearance, the cryptocurrency mining malware has spread across the world, including the U.S., Russia, and Central and South America.
In a blog post announcing the bust, security firm Avast confirmed the operation was successful.
The security firm got involved after it discovered a design flaw in the malware’s command and control server. That flaw, if properly exploited, would have “allowed us to remove the malware from its victims’ computers” without pushing any code to victims’ computers, the researchers said.
The exploit would have dismantled the operation, but the researchers lacked the legal authority to push ahead. Because most of the malware’s infrastructure was located in France, Avast contacted French police. After receiving the go-ahead from prosecutors in July, the police went ahead with the operation to take control of the server and disinfect affected computers.
The French police called the botnet “one of the largest networks” of hijacked computers in the world.
The operation worked by secretly obtaining a snapshot of the malware’s command and control server with cooperation from its web host. The researchers said they had to work carefully so as not to be noticed by the malware operators, fearing the operators could retaliate.
“The malware authors were mostly distributing cryptocurrency miners, making for a very good passive income,” the security company said. “But if they realized that we were about to take down Retadup in its entirety, they might’ve pushed ransomware to hundreds of thousands of computers while trying to milk their malware for some last profits.”
With a copy of the malicious command and control server in hand, the researchers built their own replica, which disinfected victim computers instead of causing infections.
“[The police] replaced the malicious [command and control] server with a prepared disinfection server that made connected instances of Retadup self-destruct,” said Avast in a blog post. “In the very first second of its activity, several thousand bots connected to it in order to fetch commands from the server. The disinfection server responded to them and disinfected them, abusing the protocol design flaw.”
In doing so, the company was able to stop the malware from operating and remove the malicious code from more than 850,000 infected computers.
Jean-Dominique Nollet, head of the French police’s cyber unit, said the malware operators generated several million euros worth of cryptocurrency.
Remotely shutting down a malware botnet is a rare achievement — but difficult to carry out.
Several years ago the U.S. government amended Rule 41, which now allows judges to issue search and seizure warrants outside of their jurisdiction. Many saw the move as an effort by the FBI to conduct remote hacking operations without being hindered by the locality of a judge’s jurisdiction. Critics argued it would set a dangerous precedent to allow hacking into countless computers on a single warrant from a friendly judge.
Since then the amended rule has been used to dismantle at least one major malware operation, the so-called Joanap botnet, linked to hackers working for the North Korean regime.
A number of malicious websites used to hack into iPhones over a two-year period were targeting Uyghur Muslims, TechCrunch has learned.
Sources familiar with the matter said the websites were part of a state-backed attack — likely by China — designed to target the Uyghur community in the country’s Xinjiang region.
It’s the latest in a string of efforts by the Chinese government to crack down on the minority Muslim community. In the past year, Beijing has detained more than a million Uyghurs in internment camps, according to a United Nations human rights committee.
The websites were part of a campaign to target the religious group by infecting an iPhone with malicious code simply by visiting a booby-trapped web page. In gaining unfettered access to the iPhone’s software, an attacker could read a victim’s messages, passwords, and track their location in near-real time.
These websites had “thousands of visitors” per week for at least two years, Google said.
Victims were tricked into opening a link, which when opened would load one of the malicious websites used to infect the victim. It’s a common tactic to target phone owners with spyware.
One of the sources told TechCrunch that the websites used to infect iPhones had been inadvertently indexed by Google’s search engine, prompting the FBI to ask Google to remove the sites from its index to prevent further infections.
A Google spokesperson would not comment beyond the published research. An FBI spokesperson said they could neither confirm nor deny any investigation, and did not comment further.
Google faced some criticism following its bombshell report for not releasing the websites used in the attacks. The researchers said the attacks were “indiscriminate watering hole attacks” with “no target discrimination,” noting that anyone visiting the site would have their iPhone hacked.
But the company would not say who was behind the attacks.
Apple did not comment. An email to the Chinese consulate in New York requesting comment went unanswered.
There’s no doubt that Apple’s self-polished reputation for privacy and security has taken a bit of a battering recently.
On the security front, Google researchers just disclosed a major flaw in the iPhone, finding a number of malicious websites that could hack into a victim’s device by exploiting a set of previously undisclosed software bugs. When visited, the sites infected iPhones with an implant designed to harvest personal data — such as location, contacts and messages.
As flaws go, it looks like a very bad one. And when security fails so spectacularly, all those shiny privacy promises naturally go straight out the window.
The implant was used to steal location data and files like databases of WhatsApp, Telegram, iMessage. So all the user messages, or emails. Copies of contacts, photos, https://t.co/AmWRpbcIHw pic.twitter.com/vUNQDo9noJ
— Lukasz Olejnik (@lukOlejnik) August 30, 2019
And while that particular cold-sweat-inducing iPhone security snafu has now been patched, it does raise questions about what else might be lurking out there. More broadly, it also tests the generally held assumption that iPhones are superior to Android devices when it comes to security.
Are we really so sure that thesis holds?
But imagine for a second you could unlink security considerations and purely focus on privacy. Wouldn’t Apple have a robust claim there?
On the surface, the notion of Apple having a stronger claim to privacy versus Google — an adtech giant that makes its money by pervasively profiling internet users, whereas Apple sells premium hardware and services (now including, essentially, ‘privacy as a service‘) — seems a safe (or, well, safer) assumption. Or at least until iOS security fails spectacularly and leaks users’ privacy anyway. Then, of course, affected iOS users can just kiss their privacy goodbye. That’s why this is a thought experiment.
But even directly on privacy, Apple is running into problems, too.
To wit: Siri, its nearly decade-old voice assistant technology, now sits under a penetrating spotlight — having been revealed to contain a not-so-private ‘mechanical turk’ layer of actual humans paid to listen to the stuff people tell it. (Or indeed the personal stuff Siri accidentally records.)
A hacker has broken into Jack Dorsey’s own Twitter account.
It’s not clear how it happened, but the hacker posted over a dozen tweets in quick succession, including racial epithets. It also means the unnamed hacker had access to the Twitter chief executive’s private direct messages.
Twitter allows users to secure their accounts with two-factor authentication. Facebook boss Mark Zuckerberg once had his Twitter account hacked because his account didn’t use the secondary security feature.
We’ve reached out to Twitter for more but did not immediately hear back.
Security researchers at Google say they’ve found a number of malicious websites which, when visited, could quietly hack into a victim’s iPhone by exploiting a set of previously undisclosed software flaws.
Google’s Project Zero said in a deep-dive blog post published late on Thursday that the websites were visited thousands of times per week by unsuspecting victims, in what was described as an “indiscriminate” attack.
“Simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant,” said Ian Beer, a security researcher at Project Zero.
He said the websites had been hacking iPhones over a “period of at least two years.”
The researchers found five distinct exploit chains involving 12 separate security flaws, including seven involving Safari, the built-in web browser on iPhones. The five separate attack chains allowed an attacker to gain “root” access to the device — the highest level of access and privilege on an iPhone. In doing so, an attacker can gain access to the device’s full range of features normally off-limits to the user. That means an attacker could quietly install malicious apps to spy on an iPhone owner without their knowledge or consent.
Google said that, based on its analysis, the vulnerabilities were used to steal a user’s photos and messages, and track their location in near-real time. The “implant” could also access the user’s on-device bank of saved passwords.
The vulnerabilities affect iOS 10 through to the current iOS 12 software version.
Google privately disclosed the vulnerabilities in February, giving Apple only a week to fix the flaws and roll out updates to its users. That’s a fraction of the 90 days typically given to software developers, giving an indication of the severity of the vulnerabilities.
Apple issued a fix six days later with iOS 12.1.4 for iPhone 5s and iPad Air and later.
Beer said it’s possible other hacking campaigns are currently in action.
The iPhone and iPad maker has a strong reputation on security and privacy matters. Recently the company increased its maximum bug bounty payout to $1 million for security researchers who find flaws that can silently target an iPhone and gain root-level privileges without any user interaction. Under Apple’s new bounty rules — set to go into effect later this year — Google would’ve been eligible for several million dollars in bounties.
A spokesperson for Apple did not immediately comment.