The UK’s next prime minister must prioritize a decision on whether or not to allow Chinese tech giant Huawei to be a 5G supplier, a parliamentary committee has urged — warning that the country’s international relations are being “seriously damaged” by ongoing delay.
In a statement on 5G suppliers, the Intelligence and Security Committee (ISC) writes that the government must take a decision “as a matter of urgency”.
Earlier this week another parliamentary committee, which focuses on science and technology, concluded there is no technical reason to exclude Huawei as a 5G supplier, despite security concerns attached to the company’s ties to the Chinese state, though it did recommend it be excluded from core 5G supply.
The delay in the UK settling on a 5G supplier policy can be linked not only to the complexities of trying to weigh and balance security considerations against geopolitical pressures, but also to ongoing turmoil in domestic politics following the 2016 EU referendum Brexit vote — which continues to suck most of the political oxygen out of Westminster. (And will very soon have despatched two UK prime ministers in three years.)
Outgoing PM Theresa May, whose successor is due to be selected by a vote of Conservative Party members next week, appeared to be leaning towards giving Huawei an amber light earlier this year.
A leak to the press from a National Security Council meeting back in April suggested Huawei would be allowed to provide kit but only for non-core parts of 5G networks — raising questions about how core and non-core are delineated in the next-gen networks.
The leak led May to sack the then defense secretary, Gavin Williamson, after an investigation into how confidential information was passed to the media, saying she had lost confidence in him.
The publication of a government Telecoms Supply Chain Review, whose terms of reference were published last fall, has also been delayed — leading carriers to press the government for greater clarity last month.
But with May herself now on the way out, having agreed to step down as PM back in May, the decision on 5G supply is on hold.
It will be down to either Boris Johnson or Jeremy Hunt, the two remaining contenders to take over as PM, to choose whether or not to let the Chinese tech giant supply UK 5G networks.
Whichever of the two men wins the vote, he will arrive in the top job needing to give his full attention to finding a way out of the Brexit morass — with a mere three months until the October 31 Brexit extension deadline. So there’s a risk 5G may not seem as urgent an issue, and a decision may again be kicked down the road.
In its statement on 5G supply, the ISC backs the view expressed by the public-facing branch of the UK’s intelligence service that network security is not dependent on any one supplier being excluded from building it — writing that: “The National Cyber Security Centre… has been clear that the security of the UK’s telecommunications network is not about one company or one country: the ‘flag of origin’ for telecommunications equipment is not the critical element in determining cyber security.”
The committee argues that “some parts of the network will require greater protection” — writing that “critical functions cannot be put at risk” but also that there are “less sensitive functions where more risk can be carried”, albeit without specifying what those latter functions might be.
“It is this distinction — between the sensitivity of the functions — that must determine security, rather than where in the network those functions are located: notions of ‘core’ and ‘edge’ are therefore misleading in this context,” it adds. “We should therefore be thinking of different levels of security, rather than a one-size-fits-all approach, within a network that has been built to be resilient to attack, such that no single action could disable the system.”
The committee’s statement also backs the view that the best way to achieve network resilience is to support diversity in the supply chain — i.e. by supporting more competition.
But at the same time it emphasizes that the 5G supply decision “cannot be viewed solely through a technical lens — because it is not simply a decision about telecommunications equipment”.
“This is a geostrategic decision, the ramifications of which may be felt for decades to come,” it warns, raising concerns about the perceptions of UK intelligence sharing partners by emphasizing the need for those allies to trust the decisions the government makes.
It also couches a UK decision to give Huawei access as a risk, suggesting it could be viewed externally as an endorsement of the company, thereby encouraging other countries to follow suit — without paying the full (and, it asserts, vitally necessary) attention to security.
“The UK is a world leader in cyber security: therefore if we allow Huawei into our 5G network we must be careful that that is not seen as an endorsement for others to follow. Such a decision can only happen where the network itself will be constructed securely and with stringent regulation,” it writes.
The committee’s statement goes on to raise as a matter of concern the UK’s general reliance on China as a technology supplier.
“One of the lessons the UK Government must learn from the current debate over 5G is that with the technology sector now monopolised by such a few key players, we are over-reliant on Chinese technology — and we are not alone in this, this is a global issue. We need to consider how we can create greater diversity in the market. This will require us to take a long term view — but we need to start now,” it warns.
It ends by reiterating that the debate about 5G supply has been “unnecessarily protracted” — pressing the next UK prime minister to get on and take a decision “so that all concerned can move forward”.
CrowdStrike, a cybersecurity business focused on endpoint protection, posted revenues of $96.1 million on GAAP net losses of $26 million in the first quarter of fiscal year 2020, according to the company’s first-ever earnings report released Thursday following its $612 million NASDAQ initial public offering in June.
CrowdStrike closed up 2.5% Thursday following the news, rising in after-hours trading.
The company’s revenue shot up 103% from the same period last year, with subscription revenue increasing 116% to $86 million. CrowdStrike’s stock price has continued to rise since the company priced its shares at $35 apiece last month, trading Thursday at nearly $82 per share after-hours.
The security enterprise expects full-year losses of 72 to 70 cents per share on more than $430 million in revenue.
“We are pleased with the strong start to the year,” CrowdStrike chief executive officer and co-founder George Kurtz said in a statement. “As the pioneer of cloud-native endpoint security, CrowdStrike provides the only endpoint protection platform built from the ground up to stop breaches, while reducing security sprawl with its single-agent architecture. Our continued innovation strengthens our category leadership in the Security Cloud and positions us as the fundamental endpoint platform for the future.”
Kurtz began work on Sunnyvale-based CrowdStrike in 2012 after starting his career as a CPA at Price Waterhouse, writing a book on internet security titled “Hacking Exposed: Network Security Secrets & Solutions,” then launching Foundstone, which sold to McAfee in 2004 in a deal worth $86 million. Kurtz then spent the next seven years as a general manager at McAfee, eventually becoming chief technology officer.
We chatted briefly with the CrowdStrike chief just after he rang the Nasdaq opening bell last month. You can read the full conversation here.
Team Telecom, a shadowy US national security unit tasked with protecting America’s telecommunications systems, is delaying plans by Google, Facebook and other tech companies for the next generation of international fiber optic cables.
Team Telecom is composed of representatives from the departments of Defense, Homeland Security, and Justice (including the FBI), who assess foreign investments in American telecom infrastructure, with a focus on cybersecurity and surveillance vulnerabilities.
Team Telecom works at a notoriously sluggish pace, taking over seven years to decide that letting China Mobile operate in the US would “raise substantial and serious national security and law enforcement risks,” for instance. And while Team Telecom is working, applications are stalled at the FCC.
The ongoing delays to submarine cable projects, which can cost nearly half a billion dollars each, come with significant financial impacts. They also cede advantage to connectivity projects that have not attracted Team Telecom’s attention — such as the nascent internet satellite mega-constellations from SpaceX, OneWeb and Amazon.
Team Telecom’s investigations have long been a source of tension within Silicon Valley. Google’s subsidiary GU Holdings Inc has been building a network of international submarine fiber-optic cables for over a decade. Every cable that lands on US soil is subject to Team Telecom review, and each one has faced delays and restrictions.
In just a few weeks, Apple’s new iOS 13, the thirteenth major iteration of its popular iPhone software, will be out — along with new iPhones and a new software version for the iPad, the aptly named iPadOS. We’ve taken iOS 13 for a spin over the past few weeks — with a focus on the new security and privacy features — to see what’s new and how it all works.
Here’s what you need to know.

You’ll start to see reminders about apps that track your location
Ever wonder which apps track your location? Wonder no more. iOS 13 will periodically remind you about apps that are tracking your location in the background. Every so often it will tell you how many times an app has tracked where you’ve been in a recent period of time, along with a small map of the location points. From this screen you can “always allow” the app to track your location or limit the tracking.

You can grant an app your location just once
To give you more control over which data apps have access to, iOS 13 now lets you give apps access to your location just once. Previously there was “always,” “never” or “while using,” meaning an app could be collecting your real-time location as you were using it. Now you can grant an app access on a per-use basis — particularly helpful for privacy-minded folks.

And apps wanting access to Bluetooth can be declined access
Apps wanting to access Bluetooth will also ask for your consent. Although apps can use Bluetooth to connect to gadgets, like fitness bands and watches, Bluetooth-enabled tracking devices known as beacons can be used to monitor your whereabouts. These beacons are found everywhere — from stores to shopping malls. They can grab your device’s unique Bluetooth identifier and track your physical location between places, building up a picture of where you go and what you do — often to target you with ads. Blocking Bluetooth connections from apps that clearly don’t need it will help protect your privacy.

Find My gets a new name — and offline tracking
Find My, the new app name for locating your friends and lost devices, now comes with offline tracking. Previously, if you lost your laptop, you’d rely on its last Wi-Fi-connected location. Now a lost device broadcasts its location over Bluetooth, which nearby cellular-connected iPhones and other Apple devices securely upload to Apple’s servers. The location data is cryptographically scrambled and anonymized to prevent anyone other than the device owner — including Apple — from tracking your lost devices.

Your apps will no longer be able to snoop on your contacts’ notes
Another area that Apple is trying to button down is your contacts. Apps have to ask for your permission before they can access your contacts. But in doing so they were also able to access the personal notes you wrote on each contact — like their home alarm code or a PIN for phone banking, for example. Now, apps will no longer be able to see what’s in the “notes” field of a user’s contacts.

Sign In With Apple lets you use a fake relay email address
This is one of the cooler features coming soon — Apple’s new sign-in option allows users to sign in to apps and services with one tap, and without having to turn over any sensitive or private information. Any app that offers third-party sign-in will be required to offer Sign In With Apple as an option. Users can choose to share their email with the app maker, or choose a private “relay” email, which hides a user’s real email address so the app only sees a unique Apple-generated address instead. Apple says it doesn’t collect users’ data, making it a more privacy-minded solution. It works across all devices, including Android devices and websites.

You can silence unknown callers
Here’s one way you can cut down on disruptive spam calls: iOS 13 will let you send unknown callers straight to voicemail. Anyone who’s not in your contacts list will be treated as an unknown caller.

You can strip location metadata from your photos
Every time you take a photo, your iPhone stores the precise location where the photo was taken as metadata in the photo file. But that can reveal sensitive or private locations — such as your home or office — if you share those photos on social media or other platforms, many of which don’t strip the data on upload. Now you can do it yourself: with a few taps, you can remove the location data from a photo before sharing it.

And Safari gets better anti-tracking features
Apple continues to advance its new anti-tracking technologies in its native Safari browser, like preventing cross-site tracking and browser fingerprinting. These features make it far more difficult for ads to track users across the web. iOS 13 has its cross-site tracking technology enabled by default so users are protected from the very beginning.
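A technical aside on the photo-metadata feature: the location an iPhone records is stored as EXIF data inside the image file — in a JPEG, it lives in an APP1 segment near the start of the file. iOS handles the stripping for you, but as a rough, simplified sketch of what removing that segment involves (not Apple’s implementation, and it ignores some rarer JPEG marker types):

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Drop APP1 (EXIF) segments from a JPEG, leaving image data intact."""
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")  # keep the start-of-image marker
    i = 2
    while i < len(jpeg_bytes):
        marker = jpeg_bytes[i + 1]
        if jpeg_bytes[i] != 0xFF or marker == 0xDA:
            # Start-of-scan (or raw data): copy the rest verbatim.
            out += jpeg_bytes[i:]
            break
        # Each segment: 2 marker bytes, then a big-endian length that
        # includes the length field itself.
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker != 0xE1:  # keep every segment except APP1 (EXIF/XMP)
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Dropping APP1 removes EXIF wholesale — GPS coordinates along with camera details — which is effectively what most “remove metadata” tools do.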
- iOS 13 will let you limit app location access to ‘just once’
- Apple may combine ‘Find My iPhone’ & ‘Find My Friends’ apps, launch a Tile-like tracking device
- With iOS 13, Apple locks out apps from accessing users’ private notes in Contacts
- Apple introduces ‘Sign in with Apple’ to help protect your privacy
- Answers to your burning questions about how ‘Sign In with Apple’ works
- How to stop robocalls spamming your phone
- Apple has a plan to make online ads more private
The rise of data breaches, along with an expanding raft of regulations (now numbering 80 different regional regimes, and growing), has thrust data protection — having legal and compliant ways of handling personal user information — to the top of the list of things an organization needs to consider when building and operating its business. Now a startup called InCountry, which is building both the infrastructure for these companies to securely store that personal data in each jurisdiction, as well as a comprehensive policy framework for them to follow, has raised a Series A of $15 million. The funding comes just three months after the company closed its seed round — underscoring both the attention this area is getting and the opportunity ahead.
The funding is being led by three investors: Arbor Ventures of Singapore, Global Founders Capital of Berlin, and Mubadala of Abu Dhabi. Previous investors Caffeinated Capital, Felicis Ventures, Charles River Ventures, and Team Builder Ventures (along with others that are not being named) also participated. It brings the total raised to date to $21 million.
Peter Yared, the CEO and founder, pointed out in an interview the geographic diversity of the three lead backers: he described this as a strategic investment, which has resulted from InCountry already expanding its work in each region. (As one example, he pointed out a new law in the UAE requiring all health data of its citizens to be stored in the country — regardless of where it originated.)
As a result, the startup will be opening offices in each of the regions and launching a new product, InCountry Border, to focus on encryption and data handling that keep data inside specific jurisdictions. This will sit alongside the company’s compliance consultancy as well as its infrastructure business.
“We’re only 28 people and only six months old,” Yared said. “But the proposition we offer — requiring no code changes, but allowing companies to automatically pull out and store the personally identifiable information in a separate place, without anything needed on their own back end, has been a strong pull. We’re flabbergasted with the meetings we’ve been getting.” (The alternative, of companies storing this information themselves, has become massively unpalatable, given all the data breaches we’ve seen, he pointed out.)
In part because of the nature of data protection, in its short six months of life, InCountry has already come out of the gates with a global viewpoint and global remit.
It’s already active in 65 countries — which means it’s already equipped to store, process, and regulate profile data in the country of origin in those markets — but that is actually just the tip of the iceberg. The company points out that more than 80 countries around the world have data sovereignty regulations, and that in the US, some 25 states already have data privacy laws. Violating these can have disastrous consequences for a company’s reputation, not to mention its bottom line: in Europe, earlier this month, the UK data regulator proposed fining companies the equivalent of hundreds of millions of dollars for violating GDPR rules.
This ironically is translating into a big business opportunity for startups that are building technology to help companies cope with this. Just last week, OneTrust raised a $200 million Series A to continue building out its technology and business funnel — the company is a “gateway” specialist, building the welcome screens that you encounter when you visit sites to accept or reject a set of cookies and other data requests.
Yared says that while InCountry is very young and is still working on its channel strategy — it’s mainly working directly with companies at this point — there is a clear opportunity both to partner with others within the ecosystem as well as integrators and others working on cloud services and security to build bigger customer networks.
That speaks to the complexity of the issue, and the different entry points that exist to solve it.
“The rapidly evolving and complex global regulatory landscape in our technology driven world is a growing challenge for companies,” said Melissa Guzy of Arbor Ventures, in a statement. Guzy is joining the board with this round. “InCountry is the first to provide a comprehensive solution in the cloud that enables companies to operate globally and address data sovereignty. We’re thrilled to partner and support the company’s mission to enable global data compliance for international businesses.”
Bug hunting can be a lucrative gig. Depending on the company, a serious bug reported through the proper channels can earn whoever found it first tens of thousands of dollars.
Google launched a bug bounty program for Chrome in 2010. Today, they’re increasing the maximum rewards for that program by 2-3x.
Rewards in Chrome’s bug bounty program vary considerably based on how severe a bug is and how detailed your report is — a “baseline” report with fewer details will generally earn less than a “high-quality” report that does things like explain how a bug might be exploited, why it’s happening and how it might be fixed. You can read about how Google rates reports right here.
But in both cases, the potential reward size is being increased. The maximum payout for a baseline report is increasing from $5,000 to $15,000, while the maximum payout for a high-quality report is being bumped from $15,000 to $30,000.
There’s one type of exploit that Google is particularly interested in: those that compromise a Chromebook or Chromebox device running in guest mode, and that aren’t fixed with a quick reboot. Google first offered a $50,000 reward for this type of bug, increasing it to $100,000 in 2016 after no one had managed to claim it. Today they’re bumping it to $150,000.
They’ve also introduced a new exploit category for Chrome OS rewards: lockscreen bypasses. If you can get around the lockscreen (by pulling information out of a locked user session, for example), Google will pay out up to $15,000.
Google pays additional rewards for any bugs found using its “Chrome Fuzzer Program” — a program that lets researchers write automated tests and run them on lots and lots of machines in the hopes of finding a bug that only shows up at much larger scales. The bonus for bugs found through the Fuzzer program will be increased from $500 to $1,000 (on top of whatever reward you’d normally get for a bug in that category).
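For readers unfamiliar with the term, a fuzzer feeds a program large volumes of randomly mutated inputs and watches for crashes. Google’s actual tooling is far more sophisticated, but the core loop can be sketched in a few lines — the `fragile_parser` target here is purely hypothetical, for illustration:

```python
import random

def fuzz(target, corpus, iterations=1000, seed=0):
    """Toy mutation fuzzer: randomly corrupt bytes of known-good inputs
    and collect every input that makes the target raise an exception."""
    rng = random.Random(seed)  # seeded, so runs are reproducible
    crashes = []
    for _ in range(iterations):
        data = bytearray(rng.choice(corpus))
        # Overwrite between 1 and 8 random bytes with random values.
        for _ in range(rng.randint(1, 8)):
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            target(bytes(data))
        except Exception:
            crashes.append(bytes(data))
    return crashes

# Hypothetical target: a parser that chokes on low control bytes.
def fragile_parser(data: bytes) -> None:
    if any(b < 0x10 for b in data):
        raise ValueError("unexpected control byte")

crashes = fuzz(fragile_parser, [b"hello world"])
```

Scale is the whole point: a bug that surfaces once per million random inputs is invisible in manual testing but routine when thousands of machines run loops like this continuously.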
Google says that it has paid out more than $5 million in bug bounties through its Chrome Vulnerability Rewards Program since it was introduced in 2010. As of February of this year, the company had paid out over $15 million across all of their bug bounty programs.
In 2015, Slack said it was hit by hackers who gained access to its user profile database, including users’ scrambled passwords. But the hackers also inserted code that scraped plaintext passwords as users entered them at the time.
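“Scrambled” here means hashed: the service stored a salted, one-way hash of each password rather than the password itself, which is why the attackers resorted to scraping passwords as they were typed instead of reversing the stolen hashes. The exact scheme used hasn’t been detailed, but the general pattern looks something like this generic stdlib sketch (an illustration, not Slack’s code):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store only a per-user salt and a slow, salted hash -- never plaintext."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, digest)
```

The design choice matters: a deliberately slow key-derivation function plus a unique salt per user makes bulk cracking of a stolen hash database expensive, so capturing plaintext at the point of entry becomes the attacker’s path of least resistance.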
Slack said it was recently contacted through its bug bounty about a list of allegedly compromised Slack account passwords. The company believes the case may relate to the 2015 data breach incident.
Slack said the security incident does not apply to “the approximately 99% who joined Slack after March 2015” or those who changed their password since.
Accounts that require single sign-on through a company’s network are not affected.
The company also said it has no reason to believe accounts were compromised but provided no evidence for its claim.
Slack said 1% of accounts in 2015 were affected by the breach. An earlier report suggested that the figure may amount to 65,000 accounts. When reached, a Slack spokesperson would not comment further nor confirm the figure.
Slack recently debuted on the New York Stock Exchange, valuing the company at about $15.7 billion.
Microsoft said it has notified close to 10,000 people in the past year that they have been targeted by state-sponsored hackers.
The tech giant said Wednesday that the victims were either targeted or compromised by hackers working for a foreign government. In almost all cases, Microsoft said, enterprise customers were the primary targets — such as businesses and corporations. About one in 10 victims are consumer personal accounts, the company said.
Microsoft said its new data, revealed at the Aspen Security Forum in Colorado, demonstrates the “significant extent to which nation-states continue to rely on cyberattacks as a tool to gain intelligence, influence geopolitics, or achieve other objectives.”
On top of that, the company also said it has made 781 notifications of state-sponsored attacks on organizations using its AccountGuard technology, designed for political campaigns, parties and government institutions.
Almost all of the attacks targeted U.S.-based organizations, the company said, but a spokesperson would not disclose the percentage of successful attacks.
Most of the attacks were traced back to activity by hacking groups believed to be associated with Russia, North Korea and Iran.
One such group, the so-called APT 33 group operating out of Iran — which Microsoft calls Holmium — has been in Microsoft’s cross-hairs before. In March the company said the Tehran-backed hackers stole corporate secrets and destroyed data in a two-year-long hacking campaign. Weeks later the company sued to obtain a restraining order for another Iranian hacker group, APT 35, or Phosphorus. A year earlier it took similar legal action against Russian hackers, known as APT 28, or Fancy Bear, which was blamed for disrupting the 2016 presidential election.
“Cyberattacks continue to be a significant tool and weapon wielded in cyberspace. In some instances, those attacks appear to be related to ongoing efforts to attack the democratic process,” said Microsoft’s customer security chief Tom Burt in a blog post.
Microsoft said it expects to see the “use of cyberattacks to specifically target democratic processes” ahead of the upcoming 2020 presidential election.
The idea behind Dust Identity was originally born in an MIT lab where students developed a system of uniquely identifying objects using diamond dust. Since then, the startup has been working to create a commercial application for the advanced technology, and today it announced a $10 million Series A round led by Kleiner Perkins, which also led its $2.3 million seed round last year.
Airbus Ventures and Lockheed Martin Ventures, New Science Ventures, Angular Ventures and Castle Island Ventures also participated in the round. Today’s investment brings the total raised to $12.3 million.
The company has an unusual idea of applying a thin layer of diamond dust to an object with the goal of proving that object has not been tampered with. While using diamond dust may sound expensive, the company told TechCrunch last year at the time of its seed round funding that it uses low-cost industrial diamond waste, rather than the expensive variety you find in jewelry stores.
As CEO and co-founder Ophir Gaathon told TechCrunch last year, “Once the diamonds fall on the surface of a polymer epoxy, and that polymer cures, the diamonds are fixed in their position, fixed in their orientation, and it’s actually the orientation of those diamonds that we developed a technology that allows us to read those angles very quickly.”
Ilya Fushman, who is leading the investment for Kleiner, says the company is offering a unique approach to identity and security for objects. “At a time when there is a growing trust gap between manufacturers and suppliers, Dust Identity’s diamond particle tag provides a better solution for product authentication and supply chain security than existing technologies,” he said in a statement.
The presence of strategic investors Airbus and Lockheed Martin shows that big industrial companies see a need for advanced technology like this in the supply chain. It’s worth noting that the company partnered with enterprise computing giant SAP last year to provide a blockchain interface for physical objects, storing the Dust Identity identifier on the blockchain. Although the startup has a relationship with SAP, it remains blockchain-agnostic, according to a company spokesperson.
While it’s still early days for the company, it has attracted attention from a broad range of investors and intends to use the funding to continue building and expanding the product in the coming year. To this point, it has implemented pilot programs and early deployments across a range of industries including automotive, luxury goods, cosmetics and oil, gas and utilities.
Startup founders typically face a management challenge. They often began their careers in technical engineering jobs, and are thrust into the CEO role when starting a company. Sometimes it makes sense to bring in a more experienced executive to guide a fast-growing startup, and that is what Snyk announced it’s doing today, shifting founder/CEO Guy Podjarny to president and chairman of the board, while bringing in board member and investor Peter McKay as CEO.
Over the past 18 months the company has grown significantly, moving from just 18 employees to 150, as its open-source software development approach to security has taken hold in the marketplace. McKay is someone who makes sense for the job given he has been involved with the company as an investor since its early days, and has known Podjarny in various roles for 15 years. The two talked about having a good working relationship, something that Podjarny said was essential to this transition.
“I think I would be going through many sleepless nights if I was bringing just somebody we interviewed into the company for a role like this at a time like this,” he said. He added that having known and worked with McKay for so long has helped ease the role changes.
As important as the working relationship between the two is going to be, McKay brings an executive pedigree that includes stints as co-CEO at Veeam and general manager of Americas at VMware, where he managed an operation with $4 billion in annual revenue.
McKay says that he and Podjarny have had many conversations about how they will handle their new roles moving forward. “Guy and I have spent a great deal of time talking through a lot of [issues] before we ever said that we were going to move forward with this change,” he said, adding: “We wanted to make sure we’re aligned on how we would handle decisions. We want to be aligned on how we handle things like diversity, how we handle things like empowerment and core company values.”
As for Podjarny, he says this move allows him to return to a more technical function, and the two will take advantage of each other’s strengths as they move into these new roles. “Peter brings in extensive large-scale management experience, experience with markets. This is experience that I don’t have, but which naturally complements my product vision and community leadership skills,” he explained.
As a startup grows, picking the right leader to guide the company into the future is a tricky decision, and one that Podjarny and McKay did not take lightly. In spite of their long relationship, they recognize there will be challenges ahead as company founder and board member/investor take on new roles, but they believe that this is the best decision for the company to develop and grow moving forward. Time will tell if they are right.
Another clinical lab ensnared in the AMCA data breach has come forward.
Clinical Pathology Laboratories (CPL) says 2.2 million patients may have had their names, addresses, phone numbers, dates of birth, dates of service, balance information and treatment provider information stolen in the previously-reported breach.
Another 34,500 patients had their credit card or banking information compromised.
The breach was limited to U.S. residents, the company said.
CPL blamed the AMCA, which it and other labs used to process payments for their patients, for not providing more details on the breach when it was disclosed in June.
“At the time of AMCA’s initial notification, AMCA did not provide CPL with enough information for CPL to identify potentially affected patients or confirm the nature of patient information potentially involved in the incident, and CPL’s investigation is on-going,” said the company in a statement.
Then, AMCA filed for bankruptcy protection amid several class action suits.
Several lawmakers have since contacted both Quest and LabCorp, two of the biggest laboratories in the U.S., to demand answers about the breach and why it went undetected for close to a year.
- 7.7 million LabCorp records stolen in same hack affecting Quest
- Quest Diagnostics says 11.9 million patients affected by data breach
- An unsecured SMS spam operation doxxed its owners
- Samsung spilled SmartThings app source code and secret keys
- Security lapse exposed a Chinese smart city surveillance system
- A leaky database of SMS text messages exposed password resets and two-factor codes
A security lapse at a hotel management startup has exposed hotel bookings and guests’ personal information.
The security lapse was resolved Monday after TechCrunch reached out to Aavgo, a hospitality tech company based in San Francisco, which secured a server it had left online without a password.
The server was open for three weeks — long enough for security researcher Daniel Brown to find the database.
He shared his findings exclusively with TechCrunch, then published them.
Aavgo bills itself as a way for hotels to organize their operations by using several connected apps — one for use by guests using tablets installed in their hotel rooms for entertainment, ordering room service and checking out, and another for staff to communicate with each other, file maintenance tickets and manage housekeeping.
Several large hotel chains, including Holiday Inn Express and Zenique Hotels, use Aavgo’s technology in their properties.
The database contained daily updating logs of the back-end computer system. Although most of the records were logs of computer commands critical to the running of the system, within them we found personal booking data — including names, email addresses, phone numbers, room types, prices, the location of the hotel and room, and the dates and times of check-in and check-out.
There was no financial information in the database beyond the credit card issuer.
The database also contained room service orders, guest complaints, invoices and other sensitive information used for accessing the Aavgo system, the researcher said.
Many of the records were related to its corporate hotelier customers.
One of those customers was Guestline, a property management company for hoteliers, which uses Aavgo as the underlying technology for its property management system. Guestline says it facilitates 6.3 million bookings a year.
When reached, Guestline’s data protection officer James Padkin said data protection is of “paramount importance” and the company has “ceased our very limited trial of the AavGo housekeeping app.”
Although Aavgo failed to respond to the researcher’s initial email, it shut down the database a few hours after TechCrunch made contact with its chief executive, Mrunal Desai.
“We had no data breach; however, we did find a vulnerability,” said Desai. He said data on 300 hotel rooms was exposed. Brown said based on his review of the data, however, that the number is likely higher. Desai added that the company has “already started informing our customers about this vulnerability.”
Midway through our correspondence, Desai copied the company’s outside counsel, a Texas-based law firm, which threatened “immediate legal action” ahead of the publication of this report.
Aavgo is the latest hospitality company to be embroiled in a security incident in recent years.
In 2017, hotel booking service Sabre confirmed a seven-month-long data breach of its SynXis reservation system, affecting more than 36,000 hotels globally and millions of credit cards.
A year later, Marriott-owned Starwood admitted a breach that affected up to 383 million hotel guests around the world, including about 30 million customers in the European Union. Earlier this month U.K. authorities said they would fine the company $123 million for the breach under the new GDPR regime.
A UK parliamentary committee has concluded there are no technical grounds for excluding Chinese network kit vendor Huawei from the country’s 5G networks.
In a letter from the chair of the Science & Technology Committee to the UK’s digital minister Jeremy Wright, the committee says: “We have found no evidence from our work to suggest that the complete exclusion of Huawei from the UK’s telecommunications networks would, from a technical point of view, constitute a proportionate response to the potential security threat posed by foreign suppliers.”
Though the committee does go on to recommend the government mandate the exclusion of Huawei from the core of 5G networks, noting that UK mobile network operators have “mostly” done so already — but on a voluntary basis.
If it places a formal requirement on operators not to use Huawei for core supply, the committee urges the government to provide “clear criteria” for the exclusion so that it could be applied to other suppliers in future.
Reached for a response to the recommendations, a government spokesperson told us: “The security and resilience of the UK’s telecoms networks is of paramount importance. We have robust procedures in place to manage risks to national security and are committed to the highest possible security standards.”
The spokesperson for the Department for Digital, Culture, Media and Sport added: “The Telecoms Supply Chain Review will be announced in due course. We have been clear throughout the process that all network operators will need to comply with the Government’s decision.”
In recent years the US administration has been putting pressure on allies around the world to entirely exclude Huawei from 5G networks — claiming the Chinese company poses a national security risk.
Australia announced it was banning Huawei and another Chinese vendor ZTE from providing kit for its 5G networks last year. Though in Europe there has not been a rush to follow the US lead and slam the door on Chinese tech giants.
In April leaked information from a UK Cabinet meeting suggested the government had settled on a policy of granting Huawei access as a supplier for some non-core parts of domestic 5G networks, while requiring they be excluded from supplying components for use in network cores.
On this somewhat fuzzy issue of delineating core vs non-core elements of 5G networks, the committee writes that it “heard unanimously and clearly” from witnesses that there will still be a distinction between the two in the next-gen networks.
It also cites testimony by the technical director of the UK’s National Cyber Security Centre (NCSC), Dr Ian Levy, who told it “geography matters in 5G”, and pointed out Australia and the UK have very different “laydowns” — meaning “we may have exactly the same technical understanding, but come to very different conclusions”.
In a response statement to the committee’s letter, Huawei SVP Victor Zhang welcomed the committee’s “key conclusion” before going on to take a thinly veiled swipe at the US — writing: “We are reassured that the UK, unlike others, is taking an evidence based approach to network security. Huawei complies with the laws and regulations in all the markets where we operate.”
The committee’s assessment is not all comfortable reading for Huawei, though, with the letter also flagging the damning conclusions of the most recent Huawei Oversight Board report which found “serious and systematic defects” in its software engineering and cyber security competence — and urging the government to monitor Huawei’s response to the raised security concerns, and to “be prepared to act to restrict the use of Huawei equipment if progress is unsatisfactory”.
Huawei has previously pledged to spend $2BN addressing security shortcomings related to its UK business — a figure it was forced to qualify as an “initial budget” after that same Oversight Board report.
“It is clear that Huawei must improve the standard of its cybersecurity,” the committee warns.
It also suggests the government consult on whether telecoms regulator Ofcom needs stronger powers to be able to force network suppliers to clean up their security act, writing: “While it is reassuring to hear that network operators share this point of view and are ready to use commercial pressure to encourage this, there is currently limited regulatory power to enforce this.”
Another committee recommendation is for the NCSC to be consulted on whether similar security evaluation mechanisms should be established for other 5G vendors — such as Ericsson and Nokia, the two European-based kit vendors which, unlike Huawei, are expected to be supplying core 5G.
“It is worth noting that an assurance system comparable to the Huawei Cyber Security Evaluation Centre does not exist for other vendors. The shortcomings in Huawei’s cyber security reported by the Centre cannot therefore be directly compared to the cyber security of other vendors,” it notes.
The committee dubs the issue of 5G security generally “critical”, adding that “all steps must be taken to ensure that the risks are as low as reasonably possible”.
Where “essential services” that make use of 5G networks are concerned, the committee says witnesses were clear such services must be able to continue to operate safely even if the network connection is disrupted. Government must ensure measures are put in place to safeguard operation in the event of cyber attacks, floods, power cuts and other comparable events, it adds.
While the committee concludes there is no technical reason to limit Huawei’s access to UK 5G, the letter does make a point of highlighting other considerations, most notably human rights abuses, emphasizing its conclusion does not factor them in at all — and pointing out: “There may well be geopolitical or ethical grounds… to enact a ban on Huawei’s equipment”.
It adds that Huawei’s global cyber security and privacy officer, John Suffolk, confirmed that a third party had supplied Huawei services to Xinjiang’s Public Security Bureau, despite Huawei forbidding its own employees from misusing IT and comms tech to carry out surveillance of users.
The committee suggests Huawei technology may therefore be being used to “permit the appalling treatment of Muslims in Western China”.
Old bot, new tricks.
TrickBot, financially motivated malware in wide circulation, has been observed infecting victims’ computers to steal email passwords and address books and spread malicious emails from the compromised accounts.
The TrickBot malware was first spotted in 2016 but has since developed new capabilities and techniques to spread and invade computers in an effort to grab passwords and credentials — eventually with an eye on stealing money. It’s highly adaptable and modular, allowing its creators to add in new components. In the past few months it’s adapted for tax season to try to steal tax documents for making fraudulent returns. More recently the malware gained cookie stealing capabilities, allowing attackers to log in as their victims without needing their passwords.
With these new spamming capabilities, the malware — which researchers are calling “TrickBooster” — sends malicious emails from a victim’s account, then removes the sent messages from both the outbox and the sent items folders to avoid detection.
Researchers at cybersecurity firm Deep Instinct, who found the servers running the malware spamming campaign, say they have evidence that the malware has collected more than 250 million email addresses to date. Aside from massive numbers of Gmail, Yahoo and Hotmail accounts, the researchers say several U.S. government departments and other foreign governments — including those of the U.K. and Canada — had emails and credentials collected by the malware.
“Based on the organizations affected it makes a lot of sense to get as widely spread as possible and harvest as many emails as possible,” Guy Caspi, chief executive of Deep Instinct, told TechCrunch. “If I were to land on an end point in the U.S. State department, I would try to spread as much as I can and collect any address or credential possible.”
If a victim’s computer is already infected with TrickBot, it can download the certificate-signed TrickBooster component, which sends lists of the victim’s email addresses and address books back to the main server, then begins its spamming operation from the victim’s computer.
The malware uses forged certificates to sign the component to help evade detection, said Caspi. Many of the certificates were issued in the name of legitimate businesses with no need to sign code, like heating or plumbing firms, he said.
The researchers first spotted TrickBooster on June 25 and reported it to the issuing certificate authorities a week later; the authorities revoked the certificates, making it more difficult for the malware to operate.
After identifying the command and control servers, the researchers obtained and downloaded the cache of 250 million email addresses. Caspi said the server was unprotected but “hard to access and communicate with” due to connectivity issues.
The researchers described TrickBooster as a “powerful addition to TrickBot’s vast arsenal of tools,” given its ability to move stealthily and evade detection by most antimalware vendors.
T-Mobile has reported a small decline in the number of government data requests it receives, according to its latest transparency report, quietly published this week.
The third-largest cell giant in the U.S. reported 459,989 requests during 2018, down by a little over 1% on the year earlier. That includes an overall drop in subpoenas, court orders, and pen registers and trap and trace devices, which record incoming and outgoing call information; however, the number of search warrants issued went up by 27% and wiretaps increased by almost 3%.
The company rejected 85,201 requests, an increase of 7% on the year prior.
But the number of requests for historical call detail records and cell site information, which can be used to infer a subscriber’s location, has risen significantly.
For 2018, the company received 70,224 demands for historical call data, up by more than 9% on the year earlier.
Historical cell site location data allows law enforcement to understand which cell towers carried a call, text message or data, and therefore a subscriber’s approximate location at any given time in the past. Last year the U.S. Supreme Court ruled that this data was protected and required a warrant before a company is forced to turn it over. The so-called “Carpenter” decision was expected to result in a fall in the number of requests made because the bar to obtaining the records is far higher.
T-Mobile did not immediately respond to a request asking what caused the increase.
The cell giant also reported that the number of tower dumps went up from 4,855 requests in 2017 to 6,184 requests in 2018, an increase of 27%.
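The year-over-year figures quoted above are easy to sanity-check; a quick sketch (the helper name is ours):

```python
def pct_change(old: int, new: int) -> float:
    """Percentage change from `old` to `new`."""
    return (new - old) / old * 100

# Tower dump requests: 4,855 in 2017 -> 6,184 in 2018
print(round(pct_change(4855, 6184)))  # prints 27
```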
Tower dumps are particularly controversial because these include information for all subscribers whose calls, messages and data went through a cell tower at any given time. That can include the data of hundreds or thousands of innocent subscribers at any time.
Although T-Mobile says it requires a court order or a search warrant, the Carpenter decision does not affect police accessing data obtained from tower dumps.
T-Mobile had 81.3 million customers as of its last earnings call. The company is currently in the middle of a $26.5 billion merger with Sprint. The Justice Department is reviewing the bid, but several states are looking to block the deal entirely.
In a long-awaited decision, the Federal Election Commission will now allow political campaigns to accept discounted cybersecurity services to protect themselves from cyberthreats and malicious attackers.
The FEC, which regulates political campaigns and contributions, was initially poised to block the effort under existing rules that disallow campaigns for federal candidates from receiving discounted services, because such discounts are treated as “in kind” donations.
For now the ruling allows just one firm, Area 1 Security, which brought the case to the FEC, to assist federal campaigns to fight disinformation campaigns and hacking efforts, both of which were prevalent during the 2016 presidential election.
Campaigns had fought in favor of the proposal, fearing a re-run of 2016 in the upcoming presidential and lawmaker elections in 2020.
FBI director Christopher Wray said in April that the recent disinformation efforts were “a dress rehearsal for the big show in 2020.”
In an opinion published Thursday, the FEC said the rules would be relaxed because Area 1 “would offer these services in the ordinary course of business and on the same terms and conditions as offered to similarly situated non-political clients.” In other words, political campaigns are not given a special deal but are offered the same price as others on its lowest tier of service.
Several other companies, like Facebook and Google-owned Jigsaw, have already offered free services to campaigns to fight disinformation and foreign hacking efforts.
However, many political campaigns are still not taking basic security precautions, researchers have found.
A spokesperson for Area 1 did not return a request for comment.
Here’s a thing that should have never been a thing: Bluetooth-connected hair straighteners.
Glamoriser, a U.K. firm that bills itself as the maker of the “world’s first Bluetooth hair straighteners”, allows users to link the device to an app, which lets the owner set certain heat and style settings. The app can also be used to remotely switch off the straighteners within Bluetooth range.
Big problem, though. These straighteners can be hacked.
Security researchers at Pen Test Partners bought a pair and tested them out. They found that it was easy to send malicious Bluetooth commands within range to remotely control an owner’s straighteners.
The researchers demonstrated that they could send one of several commands over Bluetooth, such as the upper and lower temperature limit of the device — 122°F and 455°F respectively — as well as the shut-down time. Because the straighteners have no authentication, an attacker can remotely alter and override the temperature of the straighteners and how long they stay on for — up to a limit of 20 minutes.
“As there is no pairing or bonding established over [Bluetooth] when connecting a phone, anyone in range with the app can take control of the straighteners,” said Stuart Kennedy in his blog post, shared first with TechCrunch.
There is a caveat, said Kennedy. The straighteners only allow one concurrent connection. If the owner hasn’t connected their phone or they go out of range, only then can an attacker target the device.
Here at TechCrunch we’re all for setting things on fire “for journalism,” but in this case the numbers speak for themselves. If, per the researchers’ findings, the straighteners could be overridden to the maximum temperature of 455°F for the full 20-minute timeout, that’s setting up a prime condition for a fire — or at the very least burn damage.
It’s estimated that as many as 650,000 house fires in the U.K. are caused by hair straighteners and curling irons left on. In some cases it can take more than a half-hour for these heated devices to cool down to safe levels. U.K. fire and rescue services have called on owners to physically pull the plug on their devices to prevent fires and damage.
Glamoriser did not respond to a request for comment prior to publication. The app hasn’t been updated since June 2018, suggesting a fix has yet to be put in place.
GDPR, and the newer California Consumer Privacy Act, have given a legal bite to ongoing developments in online privacy and data protection: it’s always good practice for companies with an online presence to take measures to safeguard people’s data, but now failing to do so can land them in some serious hot water.
Now — to underscore the urgency and demand in the market — one of the bigger companies helping organizations navigate those rules is announcing a huge round of funding. OneTrust, which builds tools to help companies navigate data protection and privacy policies both internally and with its customers, has raised $200 million in a Series A led by Insight that values the company at $1.3 billion.
It’s an outsized round for a Series A, made at an equally outsized valuation — especially considering that the company is only three years old. But that, according to CEO Kabir Barday, is down to the wide-ranging nature of the issue, and to OneTrust’s early moves and subsequent pole position in tackling it.
“We’re talking about an operational overhaul in a company’s practices,” Barday said in an interview. “That requires the right technology and reach to be able to deliver that at a low cost.” Notably, he said that OneTrust wasn’t actually in search of funding — it’s already generating revenue and could have grown off its own balance sheet — although he noted that having the capitalization and backing sends a signal to the market and in particular to larger organizations of its stability and staying power.
Currently, OneTrust has around 3,000 customers across 100 countries (and 1,000 employees), and the plan will be to continue to expand its reach geographically and to more businesses. Funding will also go towards the company’s technology: it already has 50 patents filed and another 50 applications in progress, securing its own IP in the area of privacy protection.
OneTrust offers technology and services covering three different aspects of data protection and privacy management.
Its Privacy Management Software helps an organization manage how it collects data, and it generates compliance reports in line with how a site is working relative to different jurisdictions. Then there is the famous (or infamous) service that lets internet users set their preferences for how they want their data to be handled on different sites. The third is a larger database and risk management platform that assesses how various third-party services (for example advertising providers) work on a site and where they might pose data protection risks.
These are all provided either as a cloud-based software as a service, or an on-premises solution, depending on the customer in question.
The startup also has an interesting backstory that sheds some light on how it was founded and how it identified the gap in the market relatively early.
Alan Dabbiere, who is the co-chairman of OneTrust, had been the chairman of Airwatch — the mobile device management company acquired by VMware in 2014 (Airwatch’s CEO and founder, John Marshall, is OneTrust’s other co-chairman). In an interview, he told me that it was when they were at Airwatch — where Barday had worked across consulting, integration, engineering and product management — that they began to see just how a smartphone “could be a quagmire of information.”
“We could capture apps that an employee was using so that we could show them to IT to mitigate security risks,” he said, “but that actually presented a big privacy issue. If [the employee] has dyslexia [and uses a special app for it] or if the employee used a dating app, you’ve now shown things to IT that you shouldn’t have.”
He admitted that in the first version of the software, “we weren’t even thinking about whether that was inappropriate, but then we quickly realised that we needed to be thinking about privacy.”
Dabbiere said that it was Barday who first brought that sensibility to light, and “that is something that we have evolved from.” After that, and after the VMware sale, it seemed a no-brainer that he and Marshall would come on to help the new startup grow.
Airwatch made a relatively quick exit, I pointed out. His response: the plan is to stay the course at OneTrust, with a lot more room for expansion in this market. He describes the issues of data protection and privacy as “death by 1,000 cuts.” I guess when you think about it from an enterprising point of view, that essentially presents 1,000 business opportunities.
Indeed, there is obvious growth potential to expand not just its funnel of customers, but to add in more services, such as proactive detection of malware that might leak customers’ data (which calls to mind the recently-fined breach at British Airways), as well as tools to help stop that once identified.
While there are a million other companies also looking to fix those problems today, what’s interesting is the point from which OneTrust is starting: by providing tools to organizations simply to help them operate in the current regulatory climate as good citizens of the online world.
This is what caught Insight’s eye with this investment.
“OneTrust has truly established themselves as leaders in this space in a very short timeframe, and are quickly becoming for privacy professionals what Salesforce became for salespeople,” said Richard Wells of Insight. “They offer such a vast range of modules and tools to help customers keep their businesses compliant with varying regulatory laws, and the tailwinds around GDPR and the upcoming CCPA make this an opportune time for growth. Their leadership team is unparalleled in their ambition and has proven their ability to convert those ambitions into reality.”
Wells added that while this is a big round for a Series A it’s because it is something of an outlier — not a mark of how Series A rounds will go soon.
“Investors will always be interested in and keen to partner with companies that are providing real solutions, are already established and are led by a strong group of entrepreneurs,” he said in an interview. “This is a company that has the expertise to help solve for what could be one of the greatest challenges of the next decade. That’s the company investors want to partner with and grow, regardless of fund timing.”
Apple has released a silent update for Mac users removing a vulnerable component in Zoom, the popular video conferencing app, which allowed websites to automatically add a user to a video call without their permission.
The Cupertino, Calif.-based tech giant told TechCrunch that the update — now released — removes the hidden web server, which Zoom quietly installed on users’ Macs when they installed the app.
Apple said the update does not require any user interaction and is deployed automatically.
The video conferencing giant took flak from users following a public vulnerability disclosure on Monday by Jonathan Leitschuh, in which he described how “any website [could] forcibly join a user to a Zoom call, with their video camera activated, without the user’s permission.” The undocumented web server remained installed even if a user uninstalled Zoom. Leitschuh said this allowed Zoom to reinstall the app without requiring any user interaction.
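Leitschuh’s disclosure said the hidden helper listened on a local TCP port (19421). For the curious, here is a minimal sketch of how a Mac user could check whether anything is still answering on that port — the function name is ours, and it only detects that some listener exists, not specifically Zoom’s:

```python
import socket

def local_server_listening(port: int = 19421, host: str = "127.0.0.1") -> bool:
    """Return True if something accepts TCP connections on the given
    local port (19421 was the port Zoom's helper reportedly used)."""
    try:
        with socket.create_connection((host, port), timeout=0.5):
            return True
    except OSError:
        return False

if local_server_listening():
    print("Something is listening on 127.0.0.1:19421")
```

A positive result after uninstalling Zoom would have been the telltale sign of the leftover web server.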
He also released a proof-of-concept page demonstrating the vulnerability.
Although Zoom released a fixed app version on Tuesday, Apple said its actions will protect users both past and present from the undocumented web server vulnerability without affecting or hindering the functionality of the Zoom app itself.
The update will now prompt users if they want to open the app, whereas before it would open automatically.
Apple often pushes silent signature updates to Macs to thwart known malware — similar to an anti-malware service — but it’s rare for Apple to take action publicly against a known or popular app. The company said it pushed the update to protect users from the risks posed by the exposed web server.
Zoom spokesperson Priscilla McCarthy told TechCrunch: “We’re happy to have worked with Apple on testing this update. We expect the web server issue to be resolved today. We appreciate our users’ patience as we continue to work through addressing their concerns.”
More than four million users across 750,000 companies around the world use Zoom for video conferencing.
In 2017 — for the first time in over a decade — a computer worm ran rampant across the internet, threatening to disrupt businesses, industries, governments and national infrastructure across several continents.
The WannaCry ransomware attack became the biggest threat to the internet since the Mydoom worm in 2004. On May 12, 2017, the worm infected millions of computers, encrypting their files and holding them hostage for a bitcoin ransom.
Train stations, government departments, and Fortune 500 companies were hit by the surprise attack. The U.K.’s National Health Service (NHS) was one of the biggest organizations hit, forcing doctors to turn patients away and emergency rooms to close.
Earlier this week we published a deep dive into the 2017 cyberattack, a story that’s never been told before.
British security researchers — Marcus Hutchins and Jamie Hankins — registered a domain name found in WannaCry’s code in order to track the infection. It took them three hours to realize they had inadvertently stopped the attack dead in its tracks. That domain became the now-infamous “kill switch” that instantly stopped the spread of the ransomware.
As long as the kill switch remains online, no computer infected with WannaCry will have its files encrypted.
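The check itself is trivial: before doing any damage, the worm tries to reach a hard-coded URL and exits if the request succeeds. A simplified Python sketch of the idea — the domain is a placeholder and the function is ours, not WannaCry’s actual (Windows) code:

```python
import urllib.request
import urllib.error

# Placeholder only -- not the real kill-switch domain.
KILL_SWITCH_URL = "http://example-killswitch.invalid/"

def kill_switch_active(url=KILL_SWITCH_URL, fetch=urllib.request.urlopen):
    """Return True if the kill-switch domain answers.

    WannaCry made a similar check on startup: a successful request
    meant the worm exited without encrypting anything; a failed one
    (the domain unregistered) let the payload run.
    """
    try:
        fetch(url, timeout=5)
        return True   # domain is live: halt
    except (urllib.error.URLError, OSError):
        return False  # unreachable: the real worm would proceed
```

Registering the domain flipped that check from a failure to a success for every new infection, which is why keeping it resolvable matters.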
But the attack was far from over.
In the days following, the researchers came under attack from an angry botnet operator pummeling the domain with junk traffic to try to knock it offline, and two of their servers were seized by police in France, who believed the machines were contributing to the spread of the ransomware.
Worse, their exhaustion and lack of sleep threatened to derail the operation. The kill switch was later moved to Cloudflare, which has the technical and infrastructure support to keep it alive.
Hankins described it as the “most stressful thing” he’s ever experienced. “The last thing you need is the idea of the entire NHS on fire,” he told TechCrunch.
Although the kill switch is in good hands, the internet is just one domain failure away from another massive WannaCry outbreak. Just last month two Cloudflare failures threatened to bring the kill switch domain offline. Thankfully, it stayed up without a hitch.
CISOs and CSOs take note: here’s what you need to know.