A popular family tracking app was leaking the real-time locations of more than 238,000 users for weeks after the developer left a server exposed without a password.
The app, Family Locator, built by Australia-based software house React Apps, lets family members track each other in real time — spouses keeping tabs on each other, say, or parents wanting to know where their children are. It also lets users set up geofenced alerts, sending a notification when a family member enters or leaves a certain location, such as school or work.
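The geofencing logic behind such alerts is simple at its core: compare each location fix against a stored center point and radius, and fire a notification on the inside/outside transition. A minimal Python sketch of the idea — the function names and haversine approach are illustrative, not React Apps' actual implementation:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_event(prev, curr, center, radius_m):
    """Return 'enter', 'exit', or None for two successive location fixes."""
    was_inside = haversine_m(*prev, *center) <= radius_m
    is_inside = haversine_m(*curr, *center) <= radius_m
    if is_inside and not was_inside:
        return "enter"
    if was_inside and not is_inside:
        return "exit"
    return None
```

A fix that moves from roughly a kilometer away to the center of a 200-meter fence produces an "enter" event; the reverse produces an "exit."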
But the backend MongoDB database was left unprotected and accessible by anyone who knew where to look.
Sanyam Jain, a security researcher and a member of the GDI Foundation, found the database and reported the findings to TechCrunch.
Based on a review of the database, each account record contained a user’s name, email address, profile photo and plaintext password. Each account also kept a record of the user’s own and other family members’ real-time locations, precise to just a few feet. Any user who had a geofence set up also had those coordinates stored in the database, along with what the user called them — such as “home” or “work.”
None of the data was encrypted.
TechCrunch verified the contents of the database by downloading the app and signing up using a dummy email address. Within seconds, our real-time location appeared as precise coordinates in the database.
We contacted one app user at random who, albeit surprised and startled by the findings, confirmed to TechCrunch that the coordinates found under their record were accurate. The Florida-based user, who did not want to be named, said that the coordinates in the database pinpointed the location of their business. The user also confirmed that a family member listed in the app was their child, a student at a nearby high school.
Several other records we reviewed also included the real-time locations of parents and their children.
On Friday, we asked Microsoft, which hosted the database on its Azure cloud, to contact the developer. Hours later, the database was finally pulled offline.
It’s not known precisely how long the database was exposed. The developer still hasn’t acknowledged the data leak.
European governments have been bringing the hammer down on tech in recent months, slapping record fines and stiff regulations on the largest imports out of Silicon Valley. Despite pleas from the world’s leading companies and Europe’s eroding trust in government, European citizens’ staunch support for regulation of new technologies points to an operating environment that is only getting tougher.
According to a roughly 25-page report recently published by a research arm of Spain’s IE University, European citizens remain skeptical of tech disruption and want governments to keep a tight rein on its operators, even at a cost to the economy.
The survey was led by IE’s Center for the Governance of Change — an IE-hosted research institution that studies “the political, economic, and societal implications of the current technological revolution” and works to “advance solutions to overcome its unwanted effects.” The “European Tech Insights 2019” report surveyed roughly 2,600 adults across various demographics in seven countries (France, Germany, Ireland, Italy, Spain, the Netherlands, and the UK) to gauge ground-level opinions on ongoing tech disruption and how government should deal with it.
The report does its fair share of fear-mongering and some of its major conclusions come across as a bit more “clickbaity” than insightful. However, the survey’s more nuanced data and line of questioning around specific forms of regulation offer detailed insight into how the regulatory backdrop and operating environment for European tech may ultimately evolve.
On-demand video streaming site Kanopy has fixed a leaking server that exposed the detailed viewing habits of its users.
Security researcher Justin Paine discovered the leaking Elasticsearch database last week and warned Kanopy of the exposure. The server was secured two days later on March 18, a spokesperson told TechCrunch. “We are currently investigating the scope and cause as well as reviewing all of our security protocols.”
Kanopy is like Netflix but for classic movies and documentaries. The company partners with libraries and universities across the U.S., allowing library card holders to access films for free.
In a blog post, Paine said the server contained between 25 million and 40 million daily logs, which he said could have identified every video searched for and watched from a user’s IP address.
“Depending on the videos being watched — that potentially could be embarrassing information,” he wrote.
The logs also contained geographical information, timestamps, and device types, he said. He noted that there was no other personally identifiable information — such as usernames and email addresses — attached to the logs.
According to a report last year, Kanopy has more than 30,000 movies on its platform.
Flip the “days since last Facebook security incident” back to zero.
The discovery — hundreds of millions of user passwords stored in readable plaintext on internal servers — was made in January, said Facebook’s Pedro Canahuati, as part of a routine security review. None of the passwords were visible to anyone outside Facebook, he said. But the company only admitted the security lapse months later, after security reporter Brian Krebs revealed that the logs had been accessible to some 2,000 engineers and developers.
Krebs said the bug dated back to 2012.
“This caught our attention because our login systems are designed to mask passwords using techniques that make them unreadable,” said Canahuati. “We have found no evidence to date that anyone internally abused or improperly accessed them,” he said, but he did not say how the company reached that conclusion.
Facebook said it will notify “hundreds of millions of Facebook Lite users,” a lighter version of Facebook for users where internet speeds are slow and bandwidth is expensive, and “tens of millions of other Facebook users.” The company also said “tens of thousands of Instagram users” will be notified of the exposure.
It’s believed as many as 600 million users could be affected, about one-fifth of the company’s 2.7 billion users.
Facebook didn’t say exactly how the bug came to be. Storing passwords in readable plaintext is insecure. Companies like Facebook instead hash and salt passwords — two techniques that scramble them beyond recognition — before storing them. That allows a company to verify a user’s password without ever knowing what it is.
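Hashing and salting can be sketched in a few lines using Python’s standard library — this uses PBKDF2 purely for illustration; it is not Facebook’s actual scheme:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # cost factor; slows down brute-force guessing

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). Only these are stored -- never the password."""
    salt = os.urandom(16)  # unique per user, defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The server can confirm a correct login without retaining anything an insider could read back as a password.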
It’s the latest in a string of embarrassing security issues at the company, prompting congressional inquiries and government investigations. It was reported last week that Facebook’s deals allowing other tech companies to access account data without consent were under criminal investigation.
It’s not known why Facebook took months to confirm the incident, or if the company informed state or international regulators per U.S. breach notification and European data protection laws. We asked Facebook but a spokesperson did not immediately comment beyond the blog post.
The U.K.’s Police Federation has confirmed it’s been hit by a cyberattack.
The union-like organization, which represents the 43 police forces across England and Wales, described the event as a ransomware attack in a statement shared on Twitter.
“There is no evidence at this stage that any data was extracted from our systems but this cannot be discounted,” the organization said in a tweet.
The ransomware attack hit computers at the federation’s Surrey headquarters on March 9, but was only revealed Thursday. Several databases and email systems were encrypted, the organization said, leading to some disruption to its services. Its backup data had also been deleted.
None of the federation’s 43 branches were affected, the statement said.
The National Crime Agency is investigating the attack, which the Police Federation said was “not targeted specifically” at the police organization but likely part of a wider campaign.
The organization reported the incident to the U.K.’s data protection regulator on March 11, within the required three days under European law.
A spokesperson for the Police Federation did not comment beyond the organization’s statements, and referred comment to the National Crime Agency, which did not immediately respond to questions.
The news comes two days after Norwegian aluminum manufacturer Norsk Hydro was hit by a strain of LockerGoga, a new kind of ransomware that first emerged earlier this year. The manufacturing giant said Thursday most of its operations have been restored.
Microsoft today announced that it is bringing its Microsoft Defender Advanced Threat Protection (ATP) to the Mac. Previously, this was a Windows-only solution for protecting the machines of Microsoft 365 subscribers and assisting the IT admins who try to keep them safe. It was also previously called Windows Defender ATP, but given that it is now on the Mac, too, Microsoft decided to drop the ‘Windows Defender’ moniker in favor of ‘Microsoft Defender.’
“For us, it’s all about experiences that follow the person and help the individual be more productive,” Jared Spataro, Microsoft’s corporate VP for Office and Windows, told me. “Just like we did with Office back in the day — that was a big move for us to move it off of Windows-only — but it was absolutely the right thing. So that’s where we’re headed.”
He stressed that this means that Microsoft is moving off its “Windows-centric approach to life.” He likened it to bringing the Office apps to the iPad and Android. “We’re just headed in that same direction of saying that it’s our intent that we can secure every endpoint so that this Microsoft 365 experience is not just Windows-centric,” Spataro said. Indeed, he argued that the news here isn’t even so much the launch of this service for the Mac but that Microsoft is reorienting the way it thinks about how it can deliver value for Microsoft 365 clients.
Given that Microsoft Defender is part of the Microsoft 365 package, you may wonder why those users would even care about the Mac, but there are plenty of enterprises that use a mix of Windows machines and Macs, and that provide all of their employees with Office already. Having a security solution that spans both systems can greatly reduce complexity for IT departments — and keeping up with security vulnerabilities on one system is hard enough to begin with.
In addition to the launch of the Mac version of Microsoft Defender ATP, the company also today announced the launch of new threat and vulnerability management capabilities for the service. Over the last few months, Microsoft had already launched a number of new features that help businesses proactively monitor and identify security threats.
“What we’re hearing from customers now, is that the landscape is getting increasingly sophisticated, the volume of alerts that we’re starting to get is pretty overwhelming,” Spataro said. “We really don’t have the budget to hire the thousands of people required to sort through all this and figure out what to do.”
So with this new tool, Microsoft uses its machine learning smarts to prioritize threats and present them to its customers for remediation.
To Spataro, these announcements come down to the fact that Microsoft is slowly morphing into more of a security company than ever before. “I think we’ve made a lot more progress than people realize,” he said. “And it’s been driven by the market.” He noted that its customers have long asked Microsoft to help them protect their endpoints. Now, he argues, customers have realized that Microsoft is moving to this person-centric approach (instead of a Windows-centric one) and that the company may be able to help them protect large parts of their systems. At the same time, Microsoft realized that it could use all of the billions of signals it gets from its users to better help its customers proactively.
Microsoft has rolled out a patch that will warn Windows 7 users that security updates will soon come to an end.
The patch, rolled out Wednesday, warns users of the impending deadline — January 14, 2020 — when the software giant will no longer roll out fixes for security flaws and vulnerabilities. The deadline comes some ten years after Windows 7 debuted in 2009, more than half a decade before Windows 10, Microsoft’s most recent operating system, was introduced.
Microsoft’s move to stop issuing security updates is part of the company’s ongoing effort to push users to its latest software, which is built on a stronger security foundation with improved mitigations against attacks.
Starting April 18, users on Windows 7 will begin receiving warnings about the approaching cut-off.
Windows 7 still commands some 40 percent of the desktop market, according to Net Applications. With exactly 300 days before the deadline, the clock is ticking on consumer security support.
Enterprise customers have the option to pay for extended security updates until 2023.
For years, Microsoft allowed Windows 7 users to upgrade to Windows 10 for free to try to encourage growth and upgrades. With those incentives gone, many users now have only the end of security updates to look forward to, which will put business data and systems at risk of cyberattack.
It’s almost unheard of for Microsoft to patch end-of-life software. In 2017, Microsoft released rare security patches for Windows XP — retired three years earlier — to prevent the spread of WannaCry, a ransomware strain that piggybacked off leaked hacking tools developed by the National Security Agency.
The ransomware outbreak knocked schools, businesses and hospitals offline.
Windows 7’s successor, Windows 8, will continue to receive updates until January 10, 2023.
Over the past several years, the law enforcement community has grown increasingly concerned about the conduct of digital investigations as technology providers enhance the security protections of their offerings—what some of my former colleagues refer to as “going dark.”
Data once readily accessible to law enforcement is now encrypted, protecting consumers’ data from hackers and criminals. However, these efforts have also had what Android’s security chief called the “unintended side effect” of also making this data inaccessible to law enforcement. Consequently, many in the law enforcement community want the ability to compel providers to allow them to bypass these protections, often citing physical and national security concerns.
I know first-hand the challenges facing law enforcement, but these concerns must be addressed in a broader security context, one that takes into consideration the privacy and security needs of industry and our citizens in addition to those raised by law enforcement.
Perhaps the best example of the law enforcement community’s preferred solution is Australia’s recently passed Assistance and Access Bill, an overly broad law that allows Australian authorities to compel service providers, such as Google and Facebook, to re-engineer their products and bypass encryption protections to allow law enforcement to access customer data.
While the bill includes limited restrictions on law enforcement requests, the vague definitions and concentrated authorities give the Australian government sweeping powers that ultimately undermine the security and privacy of the very citizens they aim to protect. Major tech companies, such as Apple and Facebook, agree and have been working to resist the Australian legislation and a similar bill in the UK.
Newly created encryption backdoors and work-arounds will become targets for criminals, hackers, and hostile nation states, offering new avenues for data compromise and attack through the tools themselves and the flawed code that inevitably accompanies some of them. These vulnerabilities undermine providers’ efforts to secure their customers’ data, creating new and powerful weaknesses even as companies struggle to address existing ones.
And these vulnerabilities would not only impact private citizens, but governments as well, including services and devices used by the law enforcement and national security communities. This comes amidst government efforts to significantly increase corporate responsibility for the security of customer data through laws such as the EU’s General Data Protection Regulation. Who will consumers, or the government, blame when a government-mandated backdoor is used by hackers to compromise user data? Who will be responsible for the damage?
Companies have a fiduciary responsibility to protect their customers’ data, which not only includes personally identifiable information (PII), but their intellectual property, financial data, and national security secrets.
Worse, the vulnerabilities created under laws such as the Assistance and Access Bill would be subject almost exclusively to the decisions of law enforcement authorities, leaving companies unable to make their own decisions about the security of their products. How can we expect a company to protect customer data when their most fundamental security decisions are out of their hands?
Thus far law enforcement has chosen to downplay, if not ignore, these concerns—focusing singularly on getting the information they need. This is understandable—a law enforcement officer should use every power available to them to solve a case, just as I did when I served as a State Trooper and as an FBI Special Agent, including when I served as Executive Assistant Director (EAD) overseeing the San Bernardino terror attack case during my final months in 2015.
Decisions regarding these types of sweeping powers should not and cannot be left solely to law enforcement. It is up to the private sector, and our government, to weigh competing security and privacy interests. Our government cannot sacrifice the ability of companies and citizens to properly secure their data and systems in the name of often vague physical and national security concerns, especially when there are other ways to remedy the concerns of law enforcement.
That said, these security responsibilities cut both ways. Recent data breaches demonstrate that many companies have a long way to go to adequately protect their customers’ data. Companies cannot reasonably cry foul over the negative security impacts of proposed law enforcement data access while continuing to neglect and undermine the security of their own users’ data.
Providers and the law enforcement community should be held to robust security standards that ensure the security of our citizens and their data—we need legal restrictions on how government accesses private data and on how private companies collect and use the same data.
There may not be an easy answer to the “going dark” issue, but it is time for all of us, in government and the private sector, to understand that enhanced data security through properly implemented encryption and data use policies is in everyone’s best interest.
The “extraordinary” access sought by law enforcement cannot exist in a vacuum—it will have far-reaching and significant impacts well beyond the narrow confines of a single investigation. It is time for a serious conversation between law enforcement and the private sector to recognize that their security interests are two sides of the same coin.
Norsk Hydro, one of the largest global aluminum manufacturers, has confirmed its operations have been disrupted by a ransomware attack.
The Oslo, Norway-based company said in a brief statement that the attack, which began early Tuesday, has impacted “most business areas,” forcing the aluminum maker to switch to manual operations.
“Hydro is working to contain and neutralize the attack, but does not yet know the full extent of the situation,” the company said in a statement posted to Facebook. It’s understood that the ransomware disabled a key part of the company’s smelting operations.
Employees were told to “not connect any devices” to the company’s network. Norsk Hydro’s website was also down at the time of writing.
The company manufactures aluminum products — close to half a million tons each year — and is also a significant provider of hydroelectric power in the Nordic state.
Reuters said operations in Qatar and Brazil were also under manual operation, but the company said in a public disclosure with the Norwegian stock exchange there was “no indication” of impact on primary plants outside Norway.
“It is too early to assess the full impact of the situation. It is too early to assess the impact on customers,” said the aluminum maker.
Norway’s National Security Authority did not immediately respond to an email with questions, but told Reuters that the infection is likely LockerGoga, a new kind of digitally signed ransomware that went undetected until recently. The ransomware locks files and demands a ransom payment for a decryption key.
Security expert Kevin Beaumont said earlier this month the malware was also used to target Altran, a Paris, France-based consulting firm, last month. Beaumont said the malware doesn’t require a network connection or a command and control server like other ransomware strains. A sample of the ransomware shared to malware analysis site VirusTotal shows only a handful of anti-malware products can detect and neutralize the LockerGoga malware.
Norsk Hydro spokesperson Stian Hasle did not immediately comment.
In the space of six months, one security researcher found thousands of files from dozens of computers, phones and flash drives — most of which contained personal information.
All the researcher did was scour second-hand stores for donated and refurbished tech.
New research published by security firm Rapid7 revealed how problematic discarded technology can be. For his research, Josh Frantz bought 85 devices for $650, and found over 366,300 files, including images and documents.
After an analysis of each device, Frantz found email addresses, dates of birth, Social Security and credit card numbers, driver’s license data and passport numbers.
Only two devices were properly wiped, he said.
Even short of a forensic-level search, the researcher said he could have extracted far more data from his cache of refurbished devices.
Although the responsibility arguably rests with the person who donates their device, Frantz said his research revealed many businesses also don’t wipe data from the devices people turn over — despite promises and guarantees to the contrary.
Discovering data from discarded drives seems only to be getting worse.
A similar experiment done in 2012 found half of the devices obtained still contained personal information. A recent study by the University of Hertfordshire reported two-thirds of the 200 USB drives bought from eBay had private and sensitive files — including wage slips, job applications, and even nude photos in some cases.
Worse, discarded devices can open people up to hacking. Researchers recently revealed that cheap Internet of Things devices, once thrown away, can be recovered and mined for wireless network passwords, allowing an attacker to gain a foothold in a network.
It’s the latest reminder to dispose of devices properly after they’re no longer used. Data can reside on discarded computers and drives for years — often withstanding the elements. Even erasing a device to factory reset isn’t always enough to prevent data recovery.
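For drives that will be reused rather than destroyed, a software-level overwrite is the usual minimum before the hardware changes hands. A minimal Python sketch — illustrative only, since multi-pass overwrites still cannot reach blocks an SSD’s wear leveling has remapped, which is one reason physical destruction remains the sure option:

```python
import os

def wipe_file(path: str, passes: int = 3) -> None:
    """Overwrite a file in place with random bytes, then delete it.

    Caveat: on SSDs, wear leveling may leave stale copies of the data
    in remapped flash blocks that this overwrite never touches.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random noise
            f.flush()
            os.fsync(f.fileno())       # force each pass onto the disk
    os.remove(path)
```

Dedicated tools do the same thing at the whole-device level, which is what refurbishers promising a wipe are expected to run.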
Frantz listed his favorite destruction methods: a hammer, industrial shredding — or, for the extreme cases, thermite.
It’s not to say you shouldn’t donate. Just, maybe, keep your hands on the hard drive.
Several Sprint customers have said they are seeing other customers’ personal information in their online accounts.
One reader emailed TechCrunch with several screenshots describing the issue, warning that they could see other Sprint customers’ names and phone numbers. The reader said they informed the phone giant of the issue, and a Sprint representative said they had “several calls pertaining to the same issue.”
In all, the reader saw 22 numbers in a two-hour period, they said.
Several other customers complained of the same data-exposing bug. It’s unclear how widespread the issue is or for how long the account information leak persisted.
Logged in to pay my @sprint bill, saw what looked like the details of another user. Did this 3 times. I called, rep said they’d been getting other similar calls. Advice on clarifying if this is the privacy breach it looks like? @EFF @publiccitizen @NCLC4consumers @eyywa
— Kylie B-C (@notthatkylie) March 14, 2019
I was just able to access a bunch of random peoples Sprint bill information when trying to check my own. Holy major security breach! @sprint
— Cody Rose (@cody_rose) March 19, 2019
@sprint are you having a known issue with your website?! I’m trying to set permissions on my account and some other damil’s information is on my account!
— Thelma Cheeks (@Tcheeksiamhair) March 19, 2019
Another customer told TechCrunch how the Sprint account pages were initially throwing errors. The customer said they scrolled down their account page and saw several numbers that were not theirs. “I was able to click each one individually and see every phone call they made, the text messages they used, and the standard info, including caller ID name they have set,” the customer told TechCrunch.
Of the customers we’ve spoken to, some are on prepaid plans and others are on contract.
We’ve reached out to Sprint for more but did not hear back. We’ll update when more comes in.
Slack announced today that it is launching Enterprise Key Management (EKM) for Slack, a new tool that enables customers to control their encryption keys in the enterprise version of the communications app. The keys are managed in the AWS KMS key management tool.
Geoff Belknap, chief security officer (CSO) at Slack, says that the new tool should appeal to customers in regulated industries, who might need tighter control over security. “Markets like financial services, health care and government are typically underserved in terms of which collaboration tools they can use, so we wanted to design an experience that catered to their particular security needs,” Belknap told TechCrunch.
Slack currently encrypts data in transit and at rest, but the new tool augments this by giving customers greater control over the encryption keys that Slack uses to encrypt messages and files being shared inside the app.
He said that regulated industries in particular have been requesting the ability to control their own encryption keys including the ability to revoke them if it was required for security reasons. “EKM is a key requirement for growing enterprise companies of all sizes, and was a requested feature from many of our Enterprise Grid customers. We wanted to give these customers full control over their encryption keys, and when or if they want to revoke them,” he said.
Belknap says that this is especially important when customers involve people outside the organization such as contractors, partners or vendors in Slack communications. “A big benefit of EKM is that in the event of a security threat or if you ever experience suspicious activity, your security team can cut off access to the content at any time if necessary,” Belknap explained.
In addition to controlling the encryption keys, customers can gain greater visibility into activity inside of Slack via the Audit Logs API. “Detailed activity logs tell customers exactly when and where their data is being accessed, so they can be alerted of risks and anomalies immediately,” he said. If a customer finds suspicious activity, it can cut off access.
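The pattern behind tools like EKM is envelope encryption: each piece of content is encrypted with a data key, and the data key is itself encrypted (“wrapped”) under a customer-held master key, so revoking the master key seals off everything downstream. A toy sketch of that key hierarchy — XOR with random keys stands in for a real cipher here, and this is an illustration of the concept, not Slack’s or AWS KMS’ actual cryptography:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """Toy stand-in for a real cipher; never use for actual encryption."""
    return bytes(x ^ y for x, y in zip(a, b))

# The master key lives in the customer's key management service;
# the collaboration service never stores it.
master_key = secrets.token_bytes(32)

# Each message gets its own data key, kept only in wrapped form.
data_key = secrets.token_bytes(32)
wrapped_key = xor(data_key, master_key)

message = b"quarterly numbers look strong"
ciphertext = xor(message, data_key)

# To decrypt, the service must ask the KMS to unwrap the data key.
# If the customer revokes the master key, the ciphertext stays sealed.
recovered = xor(ciphertext, xor(wrapped_key, master_key))
```

Because decryption always routes through the customer’s key service, cutting access is a matter of refusing to unwrap keys rather than hunting down stored content.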
EKM for Slack is generally available today for Enterprise Grid customers for an additional fee. Slack, which announced plans to go public last month, has raised over $1 billion on a $7 billion valuation.
A health tech company was leaking thousands of doctors’ notes, medical records, and prescriptions daily after a security lapse left a server without a password.
The little-known software company, California-based Meditab, bills itself as one of the leading electronic medical records software makers for hospitals, doctor’s offices, and pharmacies. The company, among other things, processes electronic faxes for healthcare providers, still a primary method for sharing patient files with other providers and pharmacies.
But that fax server wasn’t properly secured, according to the security company that discovered the data.
SpiderSilk, a Dubai-based cybersecurity firm, alerted TechCrunch to the exposure. The fax server was running an Elasticsearch database that had amassed more than six million records since its creation in March 2018.
Because the server had no password, anyone could read the transmitted faxes in real-time — including their contents.
According to a brief review of the data, the faxes contained a host of personally identifiable information and health information, including medical records, doctor’s notes, prescription amounts and quantities, as well as illness information, such as blood test results. The faxes also included names, addresses, dates of birth, and in some cases Social Security numbers and health insurance information and payment data.
The faxes also included personal data and health information on children. None of the data was encrypted.
The server was hosted on a subdomain of MedPharm Services, a Puerto Rico-based affiliate of Meditab, both founded by Kalpesh Patel. MedPharm was spun out as a separate company in San Juan to take advantage of tax breaks for those who set up businesses on the island.
TechCrunch verified the records by contacting several patients who confirmed their details from the faxes.
When reached about the security lapse, Patel said the company was “looking into the issue to identify the problem and solution,” but deferred comment to the company’s general counsel, Angel Marrero.
“We are still reviewing our logs and records to access the scope of any potential exposure,” said Marrero in an email.
We asked if the company planned to inform regulators and customers. Marrero said the company “will comply with any and all required notifications under current federal and state laws and regulations, as applicable.”
It’s not immediately known if anyone else discovered the exposed server, or how long the data was exposed.
Both Meditab and MedPharm claim to be compliant with HIPAA, the Health Insurance Portability and Accountability Act, which governs how healthcare providers properly manage patient data security.
Companies that expose data or violate the law can face hefty fines.
Last year was a year of “record” fines — some $25 million for several exposures and breaches, including a $4.3 million fine against the University of Texas for an inadvertent disclosure of unencrypted personal health data, and a $3.5 million settlement with Fresenius following five separate breaches.
A spokesperson for the U.S. Department of Health and Human Services did not comment.
Facebook said it removed 1.5 million videos from its site within the first 24 hours after a shooter livestreamed his attack on two New Zealand mosques, killing 50 people.
In a series of tweets, Facebook’s Mia Garlick said a total of 1.2 million videos were blocked at the point of upload. Videos that included “praise or support” for the attack were also removed, she said, using a mix of automated technologies — like audio detection — and human content moderators.
Facebook did not say why the remaining 300,000 videos were not caught at upload — a 20 percent failure rate.
In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload…
— Facebook Newsroom (@fbnewsroom) March 17, 2019
The cherry-picked “vanity” statistics only account for the total number of uploaded videos that Facebook knows about. TechCrunch found several videos still posted to Facebook more than 12 hours after the attack. Some are calling on Facebook to release the engagement figures — how many views, shares and reactions the videos received before they were taken down — which critics say is a more accurate measure of how far the videos spread.
The attack on Friday targeted worshippers during morning prayers in Christchurch, New Zealand. Police said they apprehended the shooter about half an hour after reports of the first attack came in.
The 28-year-old suspected shooter, charged with murder, livestreamed the video to Facebook using a head-mounted camera, typically used to record sporting events in first-person. Facebook closed the attacker’s account within an hour of the attack, but the video had already been shared across Facebook, Twitter and YouTube. The shooter was a self-professed fascist, according to a “manifesto” he posted shortly before the attacks. The tech companies have faced criticism for not responding to the emerging threat of violence associated with white nationalism, compared to actions taken against content in support of the so-called Islamic State group and the spread of child abuse imagery.
New Zealand prime minister Jacinda Ardern said on Sunday that social media giants like Facebook had to face “further questions” about their response to the event. Facebook second-in-command Sheryl Sandberg reportedly reached out to Ardern following the attacks.
When reached, Facebook did not comment beyond Garlick’s tweeted comments.
Democratic presidential candidate Beto O’Rourke has revealed he was a member of a notorious decades-old hacking group.
The former congressman was a member of the Texas-based hacker group, the Cult of the Dead Cow, known for inspiring early hacktivism in the internet age and building exploits and hacks for Microsoft Windows. The group used the internet as a platform in the 1990s to protest real-world events, often to promote human rights and denounce censorship. Among its many releases, the Cult of the Dead Cow was best known for its Back Orifice program, a remote access and administration tool.
O’Rourke went by the handle “Psychedelic Warlord,” as revealed by Reuters, which broke the story.
But as he climbed the political ranks, first elected to the El Paso city council in 2005, he reportedly grew concerned that his membership with the group would harm his political aspirations. The group’s members kept O’Rourke’s secret safe until the ex-hacker confirmed to Reuters his association with the group.
Reuters described him as the “most prominent ex-hacker in American political history,” who on Thursday announced his candidacy for president of the United States.
If he wins the White House, he would become the first hacker president.
O’Rourke’s history sheds light on how the candidate approaches and understands the technological issues that face the U.S. today. He’s one of the few presidential candidates to run for the White House with more than a modicum of tech knowledge — and the crucial awareness of the good and the problems tech can bring at a policy level.
“I understand the democratizing power of the internet, and how transformative it was for me personally, and how it leveraged the extraordinary intelligence of these people all over the country who were sharing ideas and techniques,” O’Rourke told Reuters.
The 46-year-old has yet to address supporters about the new revelations.
Digital identity startup Passbase has bagged $600k in pre-seed funding led by a group of business angel investors from Alphabet, Stanford, Kleiner Perkins and EY, as well as seed fund investment from Chicago-based Upheaval Investments and Seedcamp.
The 2018-founded Silicon Valley-based startup — whose co-founder we chatted to briefly on camera at Disrupt Berlin — is building what it dubs an “identity engine” to simplify identity verification online.
Passbase offers a set of SDKs to developers to integrate facial recognition, liveness detection, ID authenticity checks and ID information extraction into their service, while also baking in privacy protections that allow individual users to control their own identity data.
A demo video of the verification product shows a user being asked to record a FaceID-style 3D selfie by tilting their face in front of a webcam and then scanning an ID document also by holding it up to the camera.
On the developer front, the flagship claim is Passbase’s identity verification product can be deployed to a website or mobile app in less than three minutes, with just seven lines of code.
Co-founder Mathias Klenk tells TechCrunch the system architecture draws on ideas from public-private key encryption, blockchain, and biometric authentication — and is capable of completing “zero-knowledge authentications”.
In practice that means a website visitor or app user can prove who they are (or how old they are) without having to share their full identity document with the service.
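To make the idea concrete — and to be clear, Passbase’s actual protocol is not public, so this is only an illustrative toy — a selective-disclosure scheme lets a user reveal a single attribute (say, being over 18) and have it checked against an issuer’s commitment, without exposing the rest of the identity record. All names and data below are invented, and a real system would use digital signatures and proper zero-knowledge proofs rather than this simplified hash-and-MAC sketch:

```python
import hashlib
import hmac
import secrets

# Stand-in for the identity issuer's signing key. A real deployment would
# use an asymmetric signature so verifiers never hold issuer secrets.
ISSUER_KEY = secrets.token_bytes(32)

def commit(name: str, value: str, salt: bytes) -> bytes:
    """Hash-commit to one attribute so it can be revealed independently."""
    return hashlib.sha256(b"|".join([name.encode(), value.encode(), salt])).digest()

def issue_credential(attributes: dict):
    """Issuer commits to every attribute and MACs the commitment list."""
    salts = {k: secrets.token_bytes(16) for k in attributes}
    commitments = sorted(commit(k, v, salts[k]) for k, v in attributes.items())
    tag = hmac.new(ISSUER_KEY, b"".join(commitments), hashlib.sha256).digest()
    return salts, commitments, tag

def verify_disclosure(name, value, salt, commitments, tag) -> bool:
    """Verifier checks one revealed attribute without seeing the others."""
    recomputed = hmac.new(ISSUER_KEY, b"".join(commitments), hashlib.sha256).digest()
    return hmac.compare_digest(tag, recomputed) and commit(name, value, salt) in commitments

# The user reveals only "age_over_18", keeping name and document number hidden.
attrs = {"full_name": "Jane Doe", "document_no": "X123", "age_over_18": "true"}
salts, commitments, tag = issue_credential(attrs)
assert verify_disclosure("age_over_18", "true", salts["age_over_18"], commitments, tag)
assert not verify_disclosure("age_over_18", "false", salts["age_over_18"], commitments, tag)
```

The verifier only ever sees the one attribute and its salt; the other commitments stay opaque, which is the gist of proving how old you are without handing over the whole ID document.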
Klenk, a Stanford alumnus, says the founding team pivoted to digital identity in the middle of last year after their earlier startup — a crypto exchange management app called Coinance — ran into regulatory difficulties right after they’d decided to go full-time on the project.
He says they got a call from Apple, in August 2018, informing them Coinance had been pulled from the App Store. The issue was they needed to be able to comply with know your customer (KYC) requirements as regulators cracked down on the risk of cryptocurrency being used for money laundering.
“With a quick call to our lawyers, we learned it was because we now needed to complete strong identity verification with every exchange integrated for every user in order to fulfil our KYC obligations,” explains Klenk. “This is how our pivot to Passbase began.”
The experience with Coinance convinced Klenk and his two co-founders — Felix Gerlach (an ex-Rocket Internet product manager/designer) and Dave McGibbon (previously an investment associate at GoogleX) — that there was a “huge opportunity” to build a ‘full-stack’ identity verification tool that was easy for engineering teams to integrate. So it sounds like it’s thinking along similar lines to Estonian startup Veriff.
Klenk claims current vendors “take weeks to integrate and charged thousands of dollars from the start”. And in classic startup formula fashion he too condenses the idea down to: “Stripe for Identity Verification” — arguing that: “In order to solve digital identity verification, you cannot only streamline the identity verification process, you need to enable identity ownership and reuse across different services.”
At the same time, Klenk says the founding team saw a growing need for a privacy-focused identity verification tool — to “protect people’s information by design and help companies collect only the information they need”.
On this he freely cites Europe’s General Data Protection Regulation as an inspiring force. (“GDPR is built into the DNA of this product,” is the top-line claim.)
“Companies gain access to users information in a secure enclave, and avoid the dangers of getting hacked and leaking sensitive information,” says Klenk, describing the system architecture for verification as the core IP of the business.
They’re in the process of filing patents for the “developed technology”, working with two technical advisors, he adds.
Passbase’s verification stack itself involves modular pieces so that it can adapt to changing threats, as Klenk tells it.
The startup is partnering with service providers for various verification components. Though he says it also has in-house computer vision experts who have built its anti-spoofing and liveness detection.
“This will always be an arms race against the latest spoofing tactics. We plan to stay ahead of the curve by introducing multi-factor authentication techniques and partnering with the best technology providers,” he adds on that.
He says they’re also working with a US-based security company and other security experts to test the robustness and security of their system on an ongoing basis, adding: “We are planning to obtain all required certifications to ensure the security of our system e.g. ISO, Fido.”
Passbase’s product is currently in a closed beta with more than 200 companies signed up to its early access program.
Five have been “handpicked and onboarded” for a closed pilot — and Klenk says it’s now running tests and figuring out final requirements for an open beta launch planned for the middle of this year.
“Our early customers are mostly trust-based marketplaces (like an Airbnb),” he tells TechCrunch. “We are adding features such as PEP, OFAC, and others over the next month to allow us to also service the mobility space, age-restricted products, and eventually online banking and fintechs with KYC obligations.”
The startup’s first tranche of investor funding will be used for building out its core tech and mobile apps — while also “delighting our first clients with our B2B solution, getting traction, nailing product market fit”, as Klenk puts it.
He emphasizes that they’re also keen to nail a healthy startup culture from the get-go — saying that building “an exciting and inclusive place to work” is a priority. (“Since many high growth startups dropped the priority for this in order for growth. We want to get this right from the beginning.”)
On the competitive front, Passbase is certainly driving into a noisy arena with no shortage of past effort and current players touting identity and digital verification services — albeit, all that activity underlines the high demand level for robust online verification.
Demand that’s likely to rise as more policymakers and governments wake up to the risks and challenges posed by online fakes — and prepare to regulate Internet firms.
Discussing the competitive landscape, Klenk name-checks Jumio, Onfido, and Veriff in the identity verification space, though he argues Passbase’s “developer-focused go-to-market and focus on creating digital identity” creates a different set of incentives which he also claims “allow us to get really creative on price and auxiliary offerings”.
“Our competition cares about price x volume. We care about creating a robust and secure network of trusted user-owned digital identities,” he suggests.
On the digital identity front, he points to Civic, Verimi, and Authenteq as being focused on “digital and self-sovereign identity”, though he says they have “tended” to take a B2C approach vs Passbase’s “full-stack” developer offering which he claims is “immediately useful to a large market of players”.
There’s clearly plenty still to play for where digital identity is concerned. It remains a complex and challenging problem that loops in all sorts of entities, touchpoints and responsibilities.
But add privacy considerations into the mix and Passbase’s hope is that, by going the extra mile to build a zero-knowledge architecture, it can become a key player.
Gearbest, a Chinese online shopping giant, has exposed millions of user profiles and shopping orders, security researchers have found.
Security researcher Noam Rotem found an Elasticsearch server leaking millions of records each week, including customer data, orders and payment records. The server wasn’t protected with a password, allowing anyone who found it to search the data.
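For context on what “wasn’t protected with a password” means in practice: an open Elasticsearch node answers an unauthenticated HTTP GET on its default port 9200 with cluster metadata. The sketch below is a hypothetical illustration of that kind of check — not Rotem’s actual tooling, and the host is a placeholder:

```python
import json
from urllib import request, error

def looks_like_open_elasticsearch(body: bytes) -> bool:
    """Heuristic: an unprotected Elasticsearch node returns JSON containing
    a cluster name and the 'You Know, for Search' tagline on GET /."""
    try:
        info = json.loads(body)
    except ValueError:
        return False
    return "cluster_name" in info and info.get("tagline") == "You Know, for Search"

def probe(host: str, port: int = 9200, timeout: float = 5.0) -> bool:
    """Fetch the root endpoint; auth-protected or closed nodes fail here."""
    try:
        with request.urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            return looks_like_open_elasticsearch(resp.read())
    except (error.URLError, OSError):
        return False
```

A protected server would instead return an HTTP 401 or refuse the connection entirely — which is why leaving the default configuration internet-facing is enough for a scanner to find and dump the data.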
Gearbest ranks as one of the top 250 global websites, and serves top brands, including Asus, Huawei, Intel, and Lenovo.
TechCrunch contacted Gearbest, including through its dedicated security page, asking it to secure the database. The company neither secured the data nor responded to our request for comment.
Rotem, who shared his findings with TechCrunch and published his report at VPNMentor, said names, addresses, phone numbers, email addresses and customer orders and products purchased were among the data exposed. The database also had payment and invoice information, with amount spent and semi-masked names and email addresses.
After reviewing a portion of the data, TechCrunch found the database revealed exactly what customers bought, when, and where the items were sent.
Some of the member-specific records also included passport numbers and other national ID data. Rotem said there was little evidence of encryption, and in some cases none at all.
“The content of some people’s orders has proven very revealing,” Rotem said. Not only are the exposed orders a breach of customer privacy, the exposed data could put customers in parts of the world where freedom of speech and expression is limited in danger. Some of the listings for sex toys and other intimate purchases, for example, could lead to legal repercussions where LGBTQ+ relationships or pre-marital sex are banned.
Countries like the United Arab Emirates and Pakistan have some of the strictest laws, which can lead to punishment by death.
Rotem also found a separate exposed web-based database management system on the same IP address, allowing anyone to manipulate or disrupt the databases run by Gearbest’s parent company, Globalegrow.
It’s not known exactly how long the server was exposed. Data from internet scanning site Binary Edge showed the database was first detected on March 7.
Shenzhen-based Gearbest has a large presence in Europe, with warehouses in Spain, Poland, the Czech Republic and the U.K., where EU data protection and privacy laws apply. Any company violating the General Data Protection Regulation (GDPR) can be fined up to four percent of its global revenue.
This is the second security issue at Gearbest in as many years. In December 2017, the company confirmed accounts had been breached after what was described as a credential stuffing attack.
Each week, Extra Crunch members have access to conference calls moderated by the TechCrunch writers you read every day. This week, security reporter Zack Whittaker discussed his exclusive report about Tufts University veterinary student Tiffany Filler who was expelled on charges she hacked her grades. Being Canadian and therefore in the U.S. on a student visa, she had to immediately leave the country.
From the transcript:
Firstly, given the legal risks, the potential public relations nightmare, and the ethics behind what looked like a failed due process, why didn’t Tufts hire a third-party forensics team to investigate the incident, especially given the nature of the allegations?
Secondly, how did Tufts decide that the student was to blame for these hacks? Attribution for any hack or cyber attack is often difficult, if not impossible. And the school’s IT department showed no evidence it was qualified to investigate the source of the breaches and demonstrates a clear lack of forensics, given the conclusions it came to, according to a forensics expert we spoke to.
This was definitely one of the toughest stories I’ve had to report in years, in the last seven or eight years, covering cybersecurity, national security. Those are rare for security reporters to focus on a single person for the reporting. Typically I write about data breaches or vulnerabilities or hacks that affect thousands, if not millions, of users around the world.
But this story was far too interesting not to dig into. We tried not to determine whether or not she was guilty or innocent. The fact of the matter is that both sides had conflicting evidence, but Filler offered … it was everything, and Tufts declined to comment on 19 very specific questions we sent.
This is a deeper look into a complicated story that also contains lessons for startups in varying stages of existence. Read on.
For access to Whittaker’s full transcription and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free.
Eric: This is Eric Eldon, the managing editor of Extra Crunch, and with me today is Zack Whittaker, our security correspondent, who covers a wide range of security and hacking issues and a variety of things. Over the past year, he has been doing a deep investigation into a rather troubling case that has happened at Tufts University.
For the format today, Zack is going to tell us all about his approach for the next few minutes in his own words. Then he will open it up to questions for all of you on the phone as well as myself. So without further ado, I’ll let Zack get started.
Zack: Yeah, thanks a lot Eric, much appreciated. Big thanks to everyone who read the story. It took a long time to get this far. The story went out on Friday. It’s received a really good reception, very happy with it. I’m very tired, for what it’s worth. Yeah, this took a long time, weeks and weeks of talking to people, calling people, and trying to figure out exactly what happened here.
Even then, we still ended with more questions than answers. For anyone who read, this was a very deep story, a deep-dive story about a veterinary student, who was accused of hacking her grades. Tufts University pulls this student, her name was Tiffany Filler, out of her classes. She was still in her bloodied scrubs from treating patients, and faced several accusations from the university. Tufts said she systematically broke into several user accounts, modified permission access to those accounts, and changed grades of others.
The school says its IT department used extensive logs and database records to trace activity back to her computer, based on a unique identifier and a MAC address, as well as using other indicators, such as the network she was allegedly using — the campus’s wireless network or her own off-campus residence.
Newly released documents reveal Immigration and Customs Enforcement is tracking and targeting immigrants through a massive license plate reader database supplied with data from local police departments — in some cases violating sanctuary laws.
The documents, obtained through a Freedom of Information lawsuit filed by the American Civil Liberties Union and released Tuesday, reveal the vehicle surveillance system collects over a hundred million license plates a month from some of the largest cities in the U.S., including New York and Los Angeles, both of which are covered under laws limiting police cooperation with immigration agencies.
More than 9,000 ICE agents have access to the database, run by Vigilant Solutions, feeding some six billion vehicle detection records into Thomson Reuters’ investigative platform LEARN, which police departments can buy access to.
“The public has a right to know when a government agency — especially an immoral and rogue agency such as ICE — is exploiting a mass surveillance database that is a threat to the privacy and safety of drivers across the United States,” said Vasudha Talla, staff attorney with the ACLU of Northern California, in an email to TechCrunch.
Talla, who sued ICE to release the documents, said the government “should not have unfettered access to information that reveals where we live, where we work, and our private habits.” Critics have noted several high-profile cases of police misusing and improperly accessing license plate data.
Automatic license plate readers (ALPR) scan and detect license plates, along with the time, date and location from thousands of cameras installed across the country to spot criminals and fugitives with warrants out for their arrest. The ACLU previously called it one of the new and emerging forms of mass surveillance in the United States. Companies like Vigilant feed data collected from ALPR cameras into databases accessible to law enforcement and federal agencies, which the ACLU accused ICE of using to find and deport immigrants.
ICE has a “hot list” of more than 1,100 license plates of suspects, felons, or other subjects of interest, according to the documents released. Plates on the hot list trigger an alert to ICE agents that the vehicle has been spotted, including where and when.
“Hot lists are just one method by which ICE agents can track drivers with this system,” said Talla.
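Mechanically, a hot list is simple: every plate read — time, place, plate — is matched against a watch list, and any hit notifies the subscribing agent. The sketch below is an invented illustration of that flow, not Vigilant’s or ICE’s actual software; all plates and locations are made up:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, Iterator

@dataclass(frozen=True)
class Detection:
    """One ALPR camera read: the plate plus where and when it was seen."""
    plate: str
    camera: str
    seen_at: datetime

def alerts(detections: Iterable[Detection], hot_list: Iterable[str]) -> Iterator[Detection]:
    """Yield only the detections whose plate appears on the hot list."""
    watched = {p.upper() for p in hot_list}
    for d in detections:
        if d.plate.upper() in watched:
            yield d

reads = [
    Detection("7ABC123", "I-10 overpass", datetime(2019, 3, 12, 8, 4)),
    Detection("4XYZ987", "Main St garage", datetime(2019, 3, 12, 8, 6)),
]
hits = list(alerts(reads, ["4xyz987"]))
assert [d.plate for d in hits] == ["4XYZ987"]
```

Because every read is retained whether or not it matches, the same database also supports the broader location-history queries the ACLU objects to — the hot list is only the real-time layer on top.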
A spokesperson for ICE did not comment by our deadline on how many hot list detections led to deportations or removals from the U.S. Spokespeople for Thomson Reuters and Vigilant Solutions also did not comment.
It’s the third effort by ICE in the past five years to secure access to the database, after earlier attempts failed in 2014 over privacy concerns and in 2015 over price negotiations. The agency rushed to secure the contract before a planned hike in cost by Thomson Reuters towards the end of 2017.
ICE spent $6.1 million on its latest contract in February 2018, gaining access to 80 law enforcement agencies covering almost two-thirds of the U.S. population. To allay fears of potential misuse, the agency was required to pass a revised privacy impact assessment explaining how ICE can and cannot use the license plate data. In one released email to an NPR reporter, ICE said agents “can only access data” uploaded by police departments if they elect to share through the system.
But the ACLU found emails of ICE agents directly contacting local law enforcement officers to ask for license plate search data, circumventing the database.
In one years-long exchange, an ICE agent — whose name was redacted by the government — sent several emails to a La Habra police detective asking for license plate data.
La Habra is one of 169 police departments in California, and is one of dozens of departments known to use ALPR. But the city’s police department is not on Vigilant’s list of law enforcement partners that supply license plate data to ICE, the documents show.
We asked Jerry Price, chief of police at the La Habra Police Department, if turning over records to ICE violated California’s sanctuary status, but he would not comment.
“By going to local police informally, ICE is able to access locally collected driver location data without having to ask for formal access to the local system through the LEARN network, which could trigger local oversight or concern,” said Talla.
Other police departments were named as partners that actively feed data into the ICE-accessible database, including Upland, Merced and Union City — three cities in California, a state that in 2018 enacted laws offering sanctuary to immigrants who might be in the country illegally or otherwise subject to deportation by ICE. The laws prohibit law enforcement in the state from sharing license plate data with federal agencies, said Talla.
When reached, Union City police department chief Victor Derting did not comment. Spokespeople for Upland and Merced police departments did not respond to a request for comment.
The ACLU called for an immediate end to the license plate information sharing.
The documents also revealed how ICE initially considered trying to keep the database a secret, arguing that disclosing the capability would “almost immediately diminish its effectiveness as a law enforcement tool.”
Amid a controversial and questionable national emergency declared by the Trump administration, ICE remains a divisive agency more than ever. Last year, 19 of the top ICE investigators that investigate serious criminal cases, like drug smuggling and sex trafficking rings, called on the government to distance their work from ICE’s enforcement and removal operations unit, which investigates immigration violations and handles deportations.
In January, TechCrunch revealed dozens of ALPR cameras are still exposed on the internet — many of which are accessible without a password.
Security researchers have found a new kind of mobile adware hidden in hundreds of Android apps downloaded more than 150 million times from Google Play.
The malware, masquerading as an ad-serving platform and dubbed SimBad by researchers at security firm Check Point, infected more than 200 apps which, likely unbeknownst to their developers, would open a backdoor to install additional malware as a way to outsmart Google’s app store scanning. Once installed, the downloaded malware removes the app icon and persists in the background, loading each time the device boots up.
Once the malware retrieves its instructions from the command and control server, the malware runs through lists of web addresses in the background, serving ads to generate fraudulent revenue.
Check Point provided a list of the affected apps, which Google pulled from Google Play following the researchers’ disclosure. The list can be found here. Removing an app from the store does not delete it from users’ devices.
The top ten downloaded games amount to 55 million downloads alone:
- Snow Heavy Excavator Simulator (10,000,000 downloads)
- Hoverboard Racing (5,000,000 downloads)
- Real Tractor Farming Simulator (5,000,000 downloads)
- Ambulance Rescue Driving (5,000,000 downloads)
- Heavy Mountain Bus Simulator 2018 (5,000,000 downloads)
- Fire Truck Emergency Driver (5,000,000 downloads)
- Farming Tractor Real Harvest Simulator (5,000,000 downloads)
- Car Parking Challenge (5,000,000 downloads)
- Speed Boat Jet Ski Racing (5,000,000 downloads)
- Water Surfing Car Stunt (5,000,000 downloads)
Some of the games, mostly simulation games — hence the malware’s name — date back on Google Play to March 2017, said Aviran Hazum, mobile threat intelligence team leader at Check Point, in an email to TechCrunch.
Hazum said the malware might be an adware for now, but has the potential to evolve into a larger threat.
A Google spokesperson, when reached, did not comment. The search giant typically doesn’t discuss app removals, largely because it’s an issue that keeps recurring. It’s far from the first time Google has been forced to remove apps from its supposedly vetted app store, and time and again the company has had to react to dozens of bad apps that slipped through its scanning efforts.
Google’s official figures put the number of apps it removed last year at about 700,000.