Note: This blog post is a collaboration between the Malwarebytes and HYAS Threat Intelligence teams.
Classifying Magecart threat actors is not an easy task due to the diversity of skimmers and their reuse. The effort of attributing Magecart to “groups” started with RiskIQ and Flashpoint’s comprehensive Inside Magecart report released in fall 2018, followed by Group-IB several months later.
Much more recently, information about the actual threat actors behind groups has come forward. For example, IBM publicly identified Group 6 as being FIN6. This is interesting on many levels because it reinforces the idea that existing threat groups have been leveraging their past experiences to apply them to theft in the e-commerce field.
One group that caught our interest is Group 4, one of the more advanced cybercriminal organizations. While working jointly with security firm HYAS, we found that the email addresses used to register domains belonging to Magecart Group 4 follow patterns matching those of a sophisticated threat group known as Cobalt Group, aka Cobalt Gang or Cobalt Spider.
In the Inside Magecart report, Group 4 is described as advanced, using techniques to blend in with normal traffic. For instance, the group registers domain names that appear to be tied to advertisers or analytics providers (see IOCs for Cobalt Group domains identified using this TTP and naming convention). Another interesting aspect from the report is that Group 4 is suspected to have a history in banking malware.
One of Group 4’s original skimmers was concealed as the jquery.mask.js plugin (see IOCs for a copy of the script). The malicious code is appended at the end of the script and wrapped in several layers of obfuscation: the hex-encoded data converts to Base64, which in turn decodes to standard text revealing skimmer activity and an exfiltration gate.

Server-side skimmer
This little code snippet looks for certain keywords associated with a financial transaction and then sends the request and cookie data to the exfiltration server at secureqbrowser[.]com. An almost exact copy of this script was described by Denis Sinegubko of Sucuri in his post Autoloaded Server-Side Swiper.
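The layered encoding described above can be reproduced in a few lines of Python. Note that the payload here is an invented stand-in built the same way (plain text wrapped in Base64, then hex-encoded), not the actual skimmer blob:

```python
import base64

# Invented stand-in for the appended blob: plain text wrapped in Base64,
# then hex-encoded, mimicking the layering seen in the tampered script.
plaintext = "checkout|billing|card"  # keywords a skimmer might match
blob = base64.b64encode(plaintext.encode()).decode().encode().hex()

# Unwrapping: hex -> Base64 -> plain text
layer1 = bytes.fromhex(blob).decode()          # yields the Base64 layer
recovered = base64.b64decode(layer1).decode()  # yields the skimmer strings
print(recovered)  # checkout|billing|card
```

Peeling each layer in turn is exactly how analysts recover the keyword list and exfiltration gate from scripts like this one.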
Both the client-side and server-side skimmer domains illustrated above (bootstraproxy[.]com and s3-us-west[.]com) are registered to firstname.lastname@example.org. They are listed by RiskIQ under Magecart Group 4: Never gone, simply advancing IOCS.
By checking their exfiltration gates (secure.upgradenstore[.]com and secureqbrowser[.]com), we connected them to other registrant emails and saw a pattern emerge.
Email addresses used to register Magecart domains belonging to Magecart Group 4 contain a [first name], [initial], and [last name]. Expanding our search to other domains used by Group 4 and searching through HYAS’ Comox data set, we see this trend continues:
Cobalt Group came to the forefront of public attention in summer 2016 with their “jackpotting” attacks against financial institutions in Europe, which reportedly netted the group over $3 million. Since that time, they have purportedly amassed over a billion dollars from global institutions, evolving their tactics, techniques, and procedures as they go.
While their tactics have changed as they evolved, an identifiable pattern in the email naming conventions historically used by Cobalt allowed HYAS not only to identify previous campaign domains, but also to link Cobalt Group campaigns to the Magecart domains identified above.
A small shift from one of their previous conventions, [firstname], [lastname], [fournumbers] (overwhelmingly using protonmail accounts, with a handful of tutanota/keemail.me accounts), produced the above-noted convention of [firstname], [initial], [lastname], again using the same email services and registrars, and notably the same privacy protection services.
Given the use of privacy services for all the domains in question, it is highly unlikely that this naming convention would be known to any other actor besides those who registered both the Cobalt Group and Magecart infrastructure. In addition, further investigation revealed that regardless of the email provider used, 10 of the seemingly separate accounts reused only two different IP addresses, even over weeks and months between registrations.
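As a sketch of how registrant naming conventions like these can be hunted at scale, the regexes below encode the two patterns described above. The dot separators, the provider list, and every address shown are assumptions for illustration only, not actual registrant data:

```python
import re

# Hypothetical encodings of the two conventions described in the text:
# older Cobalt style [firstname][lastname][fournumbers] and the newer
# [firstname][initial][lastname] style. Separator choice is an assumption.
PROVIDERS = r"(protonmail\.com|tutanota\.com|keemail\.me)"
old_style = re.compile(r"^[a-z]+\.[a-z]+\.\d{4}@" + PROVIDERS + "$")
new_style = re.compile(r"^[a-z]+\.[a-z]\.[a-z]+@" + PROVIDERS + "$")

def classify(addr):
    """Label a registrant address by which convention (if any) it matches."""
    if old_style.match(addr):
        return "old Cobalt convention"
    if new_style.match(addr):
        return "newer shared convention"
    return "no match"

print(classify("john.smith.1984@protonmail.com"))  # old Cobalt convention
print(classify("john.q.smith@protonmail.com"))     # newer shared convention
print(classify("support@example.com"))             # no match
```

Running patterns like these over bulk WHOIS data is one way a small shift in convention can still tie new infrastructure back to old campaigns.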
One of those emails is email@example.com, which was used to register 23 domains, including my1xbet[.]top. This domain was used in a phishing campaign leveraging CVE-2017-0199 with a decoy document called Fraud Transaction.doc.
The same firstname.lastname@example.org also registered oracle-business[.]com. Similar campaigns against Oracle and various banks have been attributed to Cobalt Group, with, for example, the domain oracle-system[.]com.

A growing threat requires ongoing work
Based on their historical ties to the space, and the entrance of sophisticated actor groups such as FIN6 and others, it’s logical to conclude that Cobalt Group would also enter this field and continue to diversify their criminal efforts against global financial institutions.
The use of both client-side and server-side skimmers and the challenges this poses in identifying Magecart compromises by advanced threat groups necessitates the ongoing work of industry partners to help defend against this significant and growing threat. On that note, the authors of this post would like to recognize the substantial contribution that industry researchers and law enforcement officials are making to combat groups like Cobalt, and hope that the information contained within adds to this corpus of knowledge and further strengthens these efforts.
Registrant emails associated with Magecart Group 4 domains
Registrant emails associated with Cobalt domains
Cobalt domains registered with Magecart email naming convention
Previous Cobalt domains identified through naming conventions
Working together in perfect harmony like the wind and percussion sections of a symphony orchestra requires both rigorous practice and a skilled conductor. Wouldn’t it be great if our cybersecurity solutions did the same to better protect organizations? The methods and tools used to accomplish this are often referred to as security orchestration.
Even though security orchestration may sound like just another buzz-phrase in the infosec world, it’s a worthwhile methodology to explore when multiple security solutions are necessary to protect organizations against threats. So how can organizations determine whether security orchestration is necessary?
Cybersecurity policies and best practices cycle through various pendulum swings as the market shifts. Once upon a time, IT teams were told it was foolish to run two antivirus programs on one machine. However, for years Malwarebytes proved itself to be an excellent wingman to traditional AV programs—that is, until our software demonstrated that its proactive protection technology was just as innovative as its remediation and could stand alone.
In more recent years, IT teams have been urged not to put all their eggs in one basket with a single, large-scale security suite that could also have a single, large-scale point of failure. This philosophy encourages organizations to layer their protection technologies with vendors that specialize in different areas of defense.
But that can get hairy, too, especially if the various security solutions don’t cooperate or cancel each other out. Therefore, security orchestration has taken hold as a methodology that combines top protection capabilities with simplicity for implementation and use. Organizations looking to cut down on confusion and deploy best-in-class security from one platform might consider choosing products from a limited number of vendors that can integrate with one another through security orchestration software.
To further the analogy, then, the conductor is security orchestration software, while the rigorous practice is all the fine-tuning that is often required of IT and operations teams before it all works as desired.

What can security orchestration do for me?
Orchestration in this context implies:
- Solutions working together without interrupting each other
- Streamlining workflow processes so that each component does what it does best
- Unification so that data is exported in a user-friendly and organized manner
Security orchestration is ideally possible even when security software comes from different vendors. However, the setup often needs modification to get the most out of what each solution has to offer, without one interfering with the effectiveness of another.
Security orchestration is often mentioned alongside terms such as automated response, which means that the security components work well together and are capable of thwarting low-level threats without human interaction.
In those cases, we can add detection and remediation as tasks that need to be completed in an orchestrated way.
The objectives for security teams to keep in mind are:
- Clarity and simplicity when reviewing suspicious activity or an active attack
- The ability to minimize response and dwell time
- Clear and easy-to-follow rules and protocols in case of an incident
While there are a variety of different use cases for security orchestration, as well as diverse needs to be addressed by different organizations, security orchestration mostly aims to achieve the following goals:
- A single console showing all endpoints and software
- Automated incident response
- Incident response protocols
There are other methods, of course, and what works for one company may not be the perfect solution for another. For example, some organizations may focus on long-term measurements and will need their information displayed differently from an organization that is only interested in the most recent logs.

Difference between SIEM and SOAR
Two terms closely related to security orchestration are security information and event management (SIEM) and security orchestration, automation, and response (SOAR).
Based on our description of security orchestration above, you may wonder how SIEM and SOAR differ. A SIEM platform gathers and makes a first selection of the data that is brought in by the different security solutions, such as AV, firewall, IPS, or other programs. Assessing the data and deciding whether any action is required remains up to the operators. The analysts will have a toolset to perform further investigation and take action when needed.
A SOAR platform is able to take a few of those steps out of the hands of the operators and analysts. SOAR programs can automatically respond to some of the security alerts raised by the correlated data from the SIEM platform.
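The division of labor between the two can be sketched in a toy example. This is not any vendor's API; the class names, severity scale, and thresholds are all invented for illustration:

```python
# Toy illustration of the SIEM -> SOAR hand-off described above.
# All names, severity values, and thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str      # which security product raised it, e.g. "av", "ips"
    severity: int    # 1 (low) .. 10 (critical)
    host: str

def siem_correlate(events):
    """SIEM step: collect raw events and keep those worth an analyst's time."""
    return [e for e in events if e.severity >= 3]

def soar_respond(alerts):
    """SOAR step: auto-handle low-level threats, escalate the rest to humans."""
    actions = []
    for a in alerts:
        if a.severity <= 5:
            actions.append(f"auto-quarantine {a.host}")   # no human needed
        else:
            actions.append(f"escalate {a.host} to analyst")
    return actions

events = [Alert("av", 2, "pc-01"), Alert("ips", 4, "pc-02"), Alert("av", 9, "pc-03")]
print(soar_respond(siem_correlate(events)))
# ['auto-quarantine pc-02', 'escalate pc-03 to analyst']
```

The key design point is that the low-severity alert never reaches a human at all, which is precisely the dwell-time reduction SOAR promises.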
Put simply, a SIEM organizes the data gathered by security solutions and creates reports based on that data. A SOAR can take immediate action against detected threats, reducing dwell time by reducing the need for human intervention. Typically, large organizations will have both a SIEM and a SOAR, as they are not mutually exclusive.

What do you look for in security orchestration software?
Before investing in security management, consider the important points listed below. They may not apply to every situation, but are worth mulling over nonetheless:
- Will it scale? If you expect your company to grow, you’ll want your solution to grow along with it.
- Big logs are time-consuming. Does the SIEM or SOAR provide the big picture, while letting you drill down if looking for something specific?
- Is the platform versatile? How many programs, operating systems, and security software can it handle?
- Is it compliant with the standards to which you need to adhere?
- Does it provide adequate response time? Security orchestration should enable teams to respond quickly and contain the threat.
- Can you view data in real time? You should be able to see what is going on right now, not just what happened yesterday.
- Are threat analysis and indicators of compromise readily available? In case of trouble, you should be able to compare suspicious activity to known IOCs.
- Is the platform cloud-based or on premises? SIEM and SOAR in the cloud make it easier to scale and troubleshoot, but some teams prefer having control in their own environment.
All the requirements boil down to a few basics: ease of use and quick response. All the extras (and you may be surprised by what some security orchestration programs have to offer) are just that—extra. Nice to have, but useless if they don’t meet the basic requirements.

Less desirable traits
Measurements of the past can give you an idea of what to expect in the future, but that is not helpful if the solution is unable to respond to an unexpected threat. A multitude of logs and ways to present them are a burden if you don’t have the means to make sense of them.
False alarms are a risk with these programs, but you don’t want them to happen all too often. People will grow complacent and ignore alarms if they expect them to be false again. Your SIEM and SOAR platforms should not cry “wolf.”
SIEM and SOAR solutions should not limit your choice of security vendors. Replacing one solution with another is often burden enough. If you have to rearrange your whole setup to accomplish it, the will to do it will quickly diminish. If you have the luxury of starting fresh, make sure to plan ahead.
You don’t want to have to move to a new house just because you bought a new couch, right? You can introduce a new solution or replace an older one without having to re-think all the others. Installing the new solution and some fine tuning should be enough to get everything on track again.
Your operators and analysts are valuable assets, and you don’t want to keep them occupied with routine chores. You want to free them up for the important work that they do best, and automate the rest. If done right, security orchestration can both keep your team happy and your organization safe.
Stay safe and protected.
It’s no coincidence these two awareness campaigns overlap. What were once seen as separate realities—the physical and the digital—are increasingly blurred as our offices, schools, and hospitals move from paper to screen. Our homes are increasingly Internet-connected, and our personal and professional relationships are colored by the way we interact online.
Through the ubiquity of mobile devices and social media, an argument can be made that we’re already living in an augmented reality. And there is no better evidence than the real-life fallout experienced by victims of technological abuse—cyberattacks lead to identity theft and empty bank accounts, frozen assets for businesses, or worse, whole cities shutting down.
But no line is as blurry as the one toed by domestic violence abusers, who use software called stalkerware to leverage their partner’s digital footprint for physical control. And it’s stalkerware that we’re here to talk about—and hopefully eradicate—as we kick off a month of continued awareness and action.
In honor of Cybersecurity and Domestic Violence Awareness months, then, we renew our pledge to fight stalkerware. And we encourage other vendors to step up their efforts so we can work together to stomp out this scourge on the Internet once and for all.

What is stalkerware?
Stalkerware is software that was created to monitor a person’s activities on their computer or, more commonly, their mobile device—without that person’s knowledge. Though often advertised as a tool for parents to track their children’s activities, these apps are more commonly used for nefarious purposes.
Stalkerware applications can track unsuspecting victims’ locations, record calls, view text messages, pry into locally stored photos, and rifle through web-browsing activity, all while hidden from view. To illustrate, here is a list of information that stalkerware can gather—all of which can be sent to a remote user—as well as activities an abuser can conduct on a user’s device without their knowledge or consent:
- Exact geographic location via GPS
- IP address of device
- SMS message history
- Call history, including call length
- Browser history
- Contacts, including phone numbers and email addresses
- Email account credentials
- Email content from all accounts accessed from device
- Photos, videos, and audio recorded and stored on the device or connected cloud account
- Taking pictures with the front/rear camera
- Recording audio via the device mic
- Remotely turning the device on and off
Malwarebytes detects stalkerware applications through the longtime mobile threat category monitor, which is a subset of potentially unwanted programs (PUPs). Because some of these stalkerware applications can be used “legitimately,” they are currently flagged as programs users might not want on their phones. However, once presented with what stalkerware can do (or once gaining knowledge of a program that’s been installed on their device without consent), many users will likely want to delete these apps.
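At its simplest, category-based detection like this amounts to comparing what is installed on a device against a curated list of known monitor/stalkerware identifiers. The sketch below illustrates the idea only; every package name in it is invented, and real products combine such lists with behavioral signals:

```python
# Sketch of blocklist-based detection like the "monitor" PUP category:
# installed package IDs are compared against known stalkerware identifiers.
# Every package name below is invented for illustration.
KNOWN_MONITOR_APPS = {
    "com.example.trackerpro",
    "com.example.silentwatch",
}

def scan(installed_packages):
    """Return the subset of installed packages flagged as monitor/stalkerware."""
    return sorted(set(installed_packages) & KNOWN_MONITOR_APPS)

installed = ["com.android.chrome", "com.example.silentwatch", "org.mozilla.firefox"]
print(scan(installed))  # ['com.example.silentwatch']
```

The user then gets the final say: the scanner surfaces the flagged app, and the owner decides whether it belongs on the device.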
These applications represent real-life threats to domestic abuse victims, who can readily be tracked down (along with their children), even when hidden in shelters.

How to fight stalkerware
Historically, the cybersecurity industry has turned a blind eye to stalkerware. Because many of these applications are available on legitimate platforms (including iTunes and the Google Play Store) and marketed as harmless child-monitoring software, an argument could be made for their valid existence.
But reaching back more than five years, Malwarebytes has drawn a hard line in the sand about its tolerance for stalkerware. We simply won’t stand for it. We blocked it years ago, doubled our intelligence and detection capabilities back in June, and continue to press for awareness and action from advocacy groups, shelters, law enforcement, and other vendors.
So what can other vendors and individuals do to step up their efforts to fight stalkerware? For starters, many other antivirus companies don’t detect monitoring or stalkerware applications at all. Coming up with rules for stalkerware detection and adding them to their product databases can help users on any security platform better protect against these threats.
Second, spreading awareness about these types of apps and how to protect against them is key. Users should Google and Google some more to learn all they can on stalkerware. We’ve linked many of our own articles in this blog, for starters.
Advocates should listen closely to their victims who are being tracked through their phones—does it sound like they have a stalkerware problem? If so, download security apps that can scan for and remove these threats and other forms of surveillance, including spyware.
For other ideas on what cybersecurity companies could do to fight stalkerware, take a look at what we’ve done so far in 2019:
- Analyzed more than 2,500 samples of programs that had been flagged by research algorithms as potential monitoring/tracking apps, spyware, or stalkerware
- Grown our database of known stalkerware to include over 100 applications that no other vendor detects and more than 10 that are, as of press time, still on Google Play
- Developed a set of awareness blogs for domestic abuse survivors and advocates on what to do if they have stalkerware on their phones and how to protect their data
- Spoken with local nonprofit and advocacy groups about stalkerware and how to protect against it, as well as shared intel with local law enforcement and attorneys general
- Presented at the National Network to End Domestic Violence’s annual Tech Summit, with information on protecting both domestic violence survivors and the advocates who are with them in the field
- Released Malwarebytes Browser Guard, which protects against tracking applications and extensions used on browsers
- Partnered with other vendors and domestic violence awareness advocates on creating avenues for intel-sharing, definition of the threat, and underscoring that this issue is deeper than owning proprietary signatures and detections
While we’ve committed to kicking stalkerware’s ass over the last five-plus years, our work is far from over. Over the next month, we plan to follow up with articles on how individuals and organizations can do their part to better understand this threat and the way it can be used to endanger people’s safety. We’ll also continue with local and national outreach efforts, hoping to both equip advocates with technological understanding and learn from victims themselves what else can be done to support their needs.
At the center of themes regarded as important and relevant today—privacy, technological autonomy, and civic responsibility—sits stalkerware and the cybersecurity community’s response to it. We must band together to squash this threat instead of brushing it off in favor of “sexier” and scarier-sounding malware. We must pay more than lip service to defending users from physical harm, instead offering solace and protection for those in need. And we must use the full capabilities of our technology to keep users safe from stalkerware, even if it doesn’t directly impact us.
We know what we’ll be doing at Malwarebytes to fight stalkerware. We hope you’ll join us in the fight.
The post For Cybersecurity and Domestic Violence Awareness months, we pledge to fight stalkerware appeared first on Malwarebytes Labs.
Last week on Labs, we highlighted an Emotet campaign using Snowden’s new book as a lure, discussed how 15,000 webcams are vulnerable to attack, how insurance data security laws skirt political turmoil, and how the new iOS exploit checkm8 allows permanent compromise of iPhones.

Other cybersecurity news
- Google said its quantum computer outperformed conventional models, but it will still be years before it will end encryption. (Source: Wired)
- Google quietly removed at least 46 apps belonging to iHandy, a major Chinese mobile developer, from the Play store due to “deceptive or disruptive ads.” (Source: BuzzfeedNews)
- The data breach at food delivery company DoorDash affected 4.9 million customers, drivers, and merchants. (Source: CNet)
- At DefCon, researchers assembled over 100 voting machines and hackers broke into every single one of them. (Source: Mother Jones)
- New York’s attorney general has filed suit against Dunkin’ Donuts over the company’s alleged failure to notify its customers of a cyberattack. (Source: The Hill)
- 27 countries have signed a joint agreement on what constitutes fair and foul play in cyberspace, with a nod toward condemning China and Russia. (Source: CNN)
- The US Senate passed the DHS Cyber Hunt and Incident Response Teams Act (S.315) in response to rampant ransomware attacks. (Source: BleepingComputer)
- Thousands of Windows computers across the world have been infected with Nodersok, malware that downloads and installs a copy of the Node.js framework to convert infected systems into proxies and perform click fraud. (Source: ZDNet)
- Social media platforms will be forced to disclose encrypted messages from suspected terrorists, pedophiles, and other serious criminals under a new treaty between the UK and the US. (Source: The Times)
- Microsoft has blacklisted the CCleaner utility and links to the software can no longer be posted in its support forums. (Source: BetaNews)
This morning, an iOS researcher with the Twitter handle @axi0mX announced the release of a new iOS exploit named checkm8 that promises to have serious consequences for iPhone and iPad hardware. According to the tweet, this exploit is a “permanent unpatchable bootrom exploit,” capable of affecting devices from the iPhone 4S up to the iPhone X.
But what, exactly, does this mean? First, let’s explain what bootrom is. A bootrom is a read-only memory chip containing the very first code to load when a system starts up. Since bootrom code is the core of the device’s startup process, and it shouldn’t be possible to change it, compromising that code is the Holy Grail of hacking.
Although such chips can be “flashed” to change their code, that should require a level of physical access to the chip’s internals that ordinary users and software don’t have. For example, if you had access to a blank chip at the factory, with the right hardware, you could write whatever you wanted to it. It certainly shouldn’t be possible via software.
And yet, according to @axi0mX, that is not only possible, but the code needed to do so is now freely available on GitHub.
This exploit is not a jailbreak, which would provide the capabilities to install arbitrary software, get root permissions, and escape the sandbox. However, it lowers the bar for jailbreaking the device significantly, and is particularly concerning because it sits in a place that can’t be fixed without replacing the hardware.
If you’re an iOS security researcher, this will likely be the most exciting thing you’ll hear all year—possibly even for your entire career to date. However, I foresee a lot of fear, uncertainty, and doubt among most other people reading this news. So, what’s the real-world impact of this release?

Devices affected
The devices that are vulnerable to checkm8 include the following:
- iPhones from the 4s up to the iPhone X
- iPads from the 2 up to the 7th generation
- iPad Mini 2 and 3
- iPad Air 1st and 2nd generation
- iPad Pro 10.5-inch and 12.9-inch 2nd generation
- Apple Watch Series 1, Series 2, and Series 3
- Apple TV 3rd generation and 4k
- iPod Touch 5th generation to 7th generation
This is probably not an exhaustive list, and as @axi0mX mentions, more will be added.
However, the version of iOS/iPadOS/watchOS/tvOS should not matter at all, as Apple will not be able to patch this in software updates. Only purchasing a whole new, updated device would fix the problem. Apple’s A12 and later chips, used in newer devices (iPhone Xs, iPhone XR, iPhone 11 series, 3rd generation iPad Pros), are not vulnerable.

Implications
It’s important to understand that checkm8 is not a remote exploit. To compromise your iPhone, an attacker would need to have it physically in hand. That’s the upside. The downside is that, although this has not been confirmed yet, the exploit appears not to require the device to be unlocked.
At this time, it’s not known if an attacker is able to compromise the bootrom of an iOS device by tricking users into visiting a malicious website. The key word, though, is “at this time.” It’s entirely possible someone could chain together other vulnerabilities in iOS to make remote exploit possible. In fact, judging from some others claiming to have found (and not disclosed) this bug earlier, it may already have happened and been kept secret.
It’s hard to find any good news here for Apple and iOS users, but there are a few positives. First, this exploit hasn’t been weaponized yet, as far as anyone is aware. Though, of course, it could already be in secret use by criminals, forensics companies like Cellebrite and Grayshift, and surveillance companies like NSO.
Second, any means that may exist to make checkm8 remotely exploitable would rely on vulnerabilities in iOS. Although Apple cannot patch the bootrom vulnerability in affected devices, they certainly can patch iOS software. Thus, any such remote exploit chains could be killed off by Apple—once they are discovered by Apple, of course.
Third, many files on the device will be encrypted. Even if the device is compromised at the hardware level and jailbroken, that doesn’t automatically give the attacker access to the contents of those files. (Of course, it would still be possible to install malware that could potentially get access to the unencrypted contents of those files in the course of normal usage of the device.)
Finally, if you’re lucky enough to have the latest hardware, you’re safe from checkm8. Apple’s king takes the exploit rook for the win.

Possible applications
Besides the obvious threat of criminal activity, there are actually some beneficial possible uses of checkm8.
For security researchers, this is a huge boon, which should help them analyze any version of iOS that will run on an iPhone X or older. Since iOS research really can’t be done on a device that hasn’t had security restrictions lifted somehow, this will likely become one of the most important tools in researchers’ toolkits. This can benefit iOS users, as it can enable researchers to locate issues and report them to Apple.
For law enforcement, and the companies that help them unlock iPhones, this is huge. (Assuming, of course, that companies like Grayshift and Cellebrite weren’t already aware of this vulnerability.) The checkm8 exploit could be used to give them a permanent window into all but the more recent devices.
There’s debate as to how beneficial this is for users, though. On the one hand, we want law enforcement to do their jobs. On the other hand, law enforcement abuses are a problem, especially for disadvantaged minorities. Using this exploit as leverage for surveillance or other abuses of privacy rights could leave users with few options to fight back.

The reputation of iOS
Following on the heels of the report from Google Project Zero on China’s recent use of 14 different vulnerabilities to infect iPhones owned by Uyghurs with malware, this adds to the tarnish on iOS’ reputation for security. iOS has long been known as the most secure mainstream mobile system on the planet. However, these incidents lead to hard questions about whether that’s still the case.
Of course, Android devices are no strangers to these problems, either. In fact, if you search the Internet for “flash bootrom,” you’ll come up with lots of instructions on how to do this for various Android devices. On the iOS side, up until now, you’d have just found people saying it wasn’t possible.
Still, this is a serious problem. If used in the wild, it will be difficult to determine whether a device has been compromised by checkm8, due to the extremely closed nature of iOS. As with the Project Zero findings from last month, this is yet another reason that Apple needs to provide more visibility into the status of iOS. Even just being able to inspect the list of running processes without jailbreaking would be a move in the right direction.

Checkmate for iOS?
Make no mistake, this is a serious issue for Apple and iOS security. checkm8 could potentially usher in a new era of widespread iOS malware and surveillance on all but the most current devices. What’s important to note here is that, so far, checkm8 only represents potential danger. After the initial flurry dies down, we may never hear from this exploit again.
Based on what we currently know, I don’t see checkm8 as something that should drive people away from iOS permanently. Personally, as much as this concerns me, I’ll continue using my iPhone X until I have a bigger reason to upgrade. Perhaps that reason will be new developments in the checkm8 story; or perhaps it will be the inevitable failure of the battery a few years from now. Only time will tell.
The post New iOS exploit checkm8 allows permanent compromise of iPhones appeared first on Malwarebytes Labs.
Across the United States, a unique approach to lawmaking has proved radically successful in making data security stronger for one industry—insurance providers.
The singular approach has entirely sidestepped the prolonged, political arguments that have become commonplace when trying to pass federal and state data privacy laws today.
In California, for example, Big Tech lobbying groups have repeatedly supported legislative attempts to defang and diminish the consumer protections afforded by the state’s landmark data privacy law, the California Consumer Privacy Act.
In Maine, the state’s Chamber of Commerce published narrowly defined statistics in an attempt to erode public support for the state’s ISP privacy bill, one of several maneuvers that the ACLU of Maine labeled as “gaslighting”—the surreptitious act of purposefully feeding someone false information to destabilize their notions of truth and fact.
Yet, in Michigan, no immediate opposition rose to combat a law that will tighten the cybersecurity protections of insurance providers like Geico, Prudential, Progressive, AAA, Allstate, and Farmers.
The same peace washed over Mississippi earlier this year, when a similar insurance cybersecurity bill, cycling through the state’s legislature, received no comments in the public record, either for or against.
And in just eight days that spanned between July and August, the legislatures in Connecticut, Delaware, and New Hampshire passed similar cybersecurity laws, all aimed at improving the internal cybersecurity controls and processes for most workers that are licensed to sell insurance. Included in the laws are requirements to perform internal risk assessments and to maintain response plans in case of a cybersecurity incident, like a data breach.
While data privacy laws in the US have sparked repeated skirmishes, data security laws for insurance providers are enjoying a summertime ease: A bill gets introduced, is supported, and often receives a unanimous vote in passage.
This isn’t the product of sudden, benevolent bipartisanship across multiple states, though. Instead, it is the product of years-long forward planning and collaboration in the insurance industry, punctuated by a close-to-home data breach.
It is the story of a different kind of lawmaking.

Insurance and regulation—a backgrounder
Insurance regulation in the United States is, to put it lightly, strange.
In 1945, following a thorny Supreme Court case about whether or not the sale of insurance services could be labeled as “commerce,” Congress passed the McCarran-Ferguson Act. The law, which still applies today, requires that “no Act of Congress shall be construed to invalidate, impair, or supersede any law enacted by any state for the purpose of regulating the business of insurance.”
What that means in practice is that the insurance industry is regulated almost entirely by individual states.
The same cannot be said for nearly any other industry in the United States, from healthcare to finance, both of which have national information security laws that apply to their sectors.
If that isn’t complicated enough, another wrinkle in the insurance industry is who can actually sell it.
In the United States, selling insurance isn’t like selling a used record player on Craigslist—selling insurance requires a license, and, depending on the type of insurance sold, there are different types of licenses. In California, for example, there are licenses for selling life insurance, property and casualty insurance, and accident and health insurance.
The requirements to get licensed also differ from state to state. In Georgia, Hawaii, and Idaho, for example, “licensees”—who are the people required to obtain a license—must get fingerprinted, while the same is not true in Indiana, Kansas, and Nebraska. The number of hours required for pre-exam training also varies, from zero hours required in Alaska to 200 hours in Florida for those who want to sell property and casualty insurance.
Jeffrey Taft, a partner at the law firm Mayer Brown who works in the firm’s financial services regulatory and enforcement group and its cybersecurity and data privacy practice, said that, for decades, insurance companies have simply put up with the state-by-state regulations. But, Taft added, these companies can rely on the help of a group called the National Association of Insurance Commissioners (NAIC) to make sure that every state law comes from an agreed-upon place.
“Historically, how it’s been, every state has its own insurance department, and every insurance company has to deal with 50 states if they’re a national business,” Taft said. “It’s somewhat cumbersome, as you might imagine, but NAIC tries to make it a more streamlined process to make state laws consistent.”
NAIC, which dates back to 1871 (for perspective, Ulysses S. Grant was president), is the association of the chief insurance regulators from each of the 50 states, plus Washington, DC, and five US territories. The regulators routinely work together to establish standards and best practices and to write what are called “model laws,” which are, essentially, draft pieces of legislation which the group publishes and leaves for individual states to adopt as they choose.
These model laws often address a certain need or threat for the insurance industry, from military sales practices to insider trading to corporate governance disclosures.
In 2014, that need was cybersecurity. That year, the NAIC Executive Committee established a “Cybersecurity Task Force” to review the association’s current model laws that touched on information security and consumer privacy protections, and to make a call on how to best address cybersecurity concerns through the association’s own model law process.
Jennifer McAdam, senior counsel for the NAIC, said that, because of the various kinds of important, private data that insurers collect, cybersecurity is a top concern.
“For NAIC members, it was important to address the unique issues insurers face regarding cybersecurity,” McAdam said. “Insurers collect sensitive consumer data including social security numbers, financial account information, and health care data. Because they collect and maintain this kind of data, insurers are at a high risk of being breached.”
One year later, that risk became an unavoidable reality.

The Anthem data breach and the Insurance Data Security Model Law
On February 4, 2015, the health insurance company Anthem disclosed that hackers had stolen the records of 37.5 million people. Names, birthdays, Social Security numbers, physical and email addresses, medical IDs, and employment and income information had all been harvested in the attack. Twenty days later, Anthem readjusted its victim estimate. It wasn’t 37.5 million, it was 78.8 million.
In 2016, in continuing its work on cybersecurity concerns, the NAIC task force began drafting its Insurance Data Security Model Law.
“Drafting started a year after Anthem disclosed its massive health care data breach,” McAdam said. “As insurance commissioners from across the country collaborated to address the Anthem breach, they began discussing what kind of model legislation would help them perform their jobs better in the event of future similar breaches.”
The model law took 18 months to draft, during which six versions were shared and opened to comment for a period of about 30–45 days each.
McAdam said that, after the second version of the model law received comments, six priorities were identified:
- State uniformity and exclusivity of the law
- Potential exemptions for licensees that are subject to current federal information security laws
- Whether to include a “harm trigger” in the definition of a “data breach”
- The definition of “personal information”
- Scalability of data protection requirements for smaller insurance licensees
- The oversight of third-party service providers that can access some of the data held by licensees
Starting in November 2016, a smaller “drafting group”—which included regulators from seven states, representatives from nine industry groups, a representative from one consumer group, and a professor of law at the University of Connecticut—began ironing out these six priorities.
In February 2017, the drafting group found help in looking to New York. That month, the New York Department of Financial Services released its own rules on cybersecurity, called the NYDFS Cybersecurity Regulation.
The New York regulation addressed some of the very same priorities that the NAIC drafting group was set to solve, including when and how to notify consumers of a data breach, and what type of information to protect, which the NYDFS regulation described as “non-public information.”
After implementing a few ideas from the New York regulation, the NAIC Insurance Data Security Model Law reached a concrete shape.
The model law, in its final version, requires that licensees protect “non-public information,” which is defined as any information that belongs to a consumer which, because of their “name, number, personal mark, or other identifier” can reveal a consumer’s identity when combined with another part of their data, including their Social Security number, driver’s license number, credit or debit card number, and any security code, access code, or password that would permit access to a consumer’s financial account, along with their biometric records.
The model law mandates that licensees perform risk assessments to determine how to best protect their non-public information. Following the risk assessment, licensees should use what they’ve learned to develop an “Information Security Program” that comprises administrative, technical, and physical safeguards to protect non-public information. Licensees can pick from a variety of measures to protect non-public information, from placing controls on who can access that information, to using encryption, to using multi-factor authentication, to developing procedures for securely disposing of non-public information.
While the above mitigation and protection methods are suggestions, the model law requires that licensees provide cybersecurity awareness training to their personnel, and to “stay informed regarding emerging threats or vulnerabilities.”
Insurance licensees must also practice “due diligence” when hiring third-party service providers that can access the insurance licensees’ non-public information.
Further, the model law makes exceptions for companies that are already subject to the Health Insurance Portability and Accountability Act (HIPAA).
In October 2017, the full NAIC membership and plenary adopted the Insurance Data Security Model Law.
The model law proved immediately popular with several states.

States take action
On January 23, 2018, two South Carolina lawmakers introduced the South Carolina Insurance Data Security Act into the state’s House of Representatives. In April, the state’s Senate voted unanimously in favor of the law, and on May 3, Governor Henry McMaster signed the act into law.
Ohio adopted its insurance data security law on December 19, 2018. Michigan did the same nine days later, following a legislative process in which several insurance trade associations—all of which represented the businesses that would be subject to new regulations—spoke in favor of the bill. In fact, Michigan’s bill received no industry group opposition; only the Department of Insurance and Financial Services demurred, in testifying “with concerns,” but without opposition.
Mississippi’s governor signed the state’s insurance data security law on April 3, 2019, after the state’s Senate voted unanimously in favor of it. And from July 26 through August 2, Connecticut, Delaware, and New Hampshire adopted their own insurance data security laws.
The state laws differ in small ways, but all of them, except for the Connecticut law, are modeled directly after the NAIC Insurance Data Security Model Law. Connecticut, instead, modeled its law after the New York Department of Financial Services Regulation.

Care to take this outside (of insurance)?
The NAIC’s model law, while successful, is the product of a one-of-a-kind framework. Because of a Supreme Court case that flipped the switch on how insurance could be regulated, Congress decided to pass a law to preserve the way it had always been regulated—by the states.
Because those states each have a Department of Insurance and elected Insurance Commissioners, those commissioners can work together to draft laws that can then be taken to their state legislatures. And because industry insiders are involved in the drafting process, they rarely oppose the resulting bills, which reflect their own ideas.
All of those ingredients, this time, added up to make better data security laws for one industry. Those same ingredients cannot be added up to make a comprehensive data privacy law in the United States, unfortunately.
If anything, privacy advocates would likely balk at the idea of Big Tech insiders working together to write a bill that regulates their own activities.
In fact, that process has already happened. Earlier this month, some of the largest companies in America published a framework for what they wanted to see in a federal consumer privacy law. Included in their recommendations was the proposal that no private individual could sue them for violating the future law.
How predictable. Who wants to place a bet on whether Congress would unanimously approve such a bill?
So while the model for creating and passing insurance data privacy and cybersecurity laws results in consistent frameworks adopted across individual states, lawmakers cannot apply the same process to other industries. Instead, they might consider using a different model law, the GDPR, to provide a framework for federal data privacy legislation.
Until then, expect plenty more opposition to data privacy and cybersecurity laws passed in any other states for any other industries.
The post Insurance data security laws skirt political turmoil appeared first on Malwarebytes Labs.
Webcams may have been around for a long time, but that doesn’t mean we know what we’re doing with them. Webcam hacking has been around for nearly as long, yet new research from Wizcase indicates that more than 15,000 private, web-connected cameras are exposed and readily accessible to the general public.
So forget hacking: cybercriminals can simply take a stroll through the Internet and grab whatever webcam footage they like.
Malware targeting web cameras is a mainstay of the malicious hacker’s toolkit. Sometimes it’s for profit and blackmail. Often the threat of footage that doesn’t exist is mashed up with old data breaches to force people to part with their money.
Other times, people would hack PCs and reveal shock meme footage on a victim’s desktop, then capture screenshots for posterity, sharing them on hacker forums for giggles and bragging rights.
Mainly, what seems to be happening a lot right now is a whole lot of negligence. People are connecting their cameras to the Internet without any security features enabled. Worse still, many cams don’t have any security features to enable in the first place.

A persistent problem
We’ve spoken at length as to why security features aren’t necessarily advertised front and centre in the instructions of IoT devices. Companies want to seduce buyers with cool tools and amazing features, not ram “SET UP A PASSWORD” down their throats on page one of the instruction booklet. It’s strange, considering how safety and security messaging is typically high priority for other products.
When was the last time you saw a car advertised without some sort of passing mention of seatbelts, or how good the rollcage is, or how many airbags they have, or words like “safety for the whole family”? Epilepsy, violence, and adult language warnings are now a prominent feature of video games, movies, and television. Even social media comes with trigger warnings.
Computer equipment, though? Somehow it seems to run the risk of making the cool toys very uncool indeed. You know what’s definitely worse than security warnings all over the place?
Default configurations exposing your webcam’s stream to the whole world.

Webcam hacking the planet
Researchers from Wizcase discovered the following:
Around 15,000 webcams located in homes, businesses, places of worship, and many more were placed online without additional security measures. Regions spanned the globe, from Argentina and Brazil to the UK and Vietnam. Both adults at work and children presumably at home were all easily viewable after the cams were accessed remotely. This is a clear privacy and security risk, especially in terms of potential damage threatened by phishing, blackmail, sextortion, and more.
The cams offered up problems such as unsecured P2P networking and lack of password authentication on devices with Universal Plug and Play (UPnP) enabled, and easily guessable default login passwords for admin. In situations where consumers expected products to work “out of the box,” this problem was exacerbated by a lack of security knowledge.
In addition, not only were the cam streams accessible, but there were also other areas where admin accounts could be compromised by webcam hacking techniques. Geolocation and potential control of devices were also possible.
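Given the missing-authentication findings above, one quick sanity check an owner can run is whether a camera’s web interface answers an unauthenticated request at all. The sketch below is illustrative only: it assumes the device exposes an HTTP admin page on the local network (check your device’s manual for the real address and port) and uses nothing beyond the Python standard library. A properly secured device should answer 401 or 403; a 200 with no credentials supplied is the red flag.

```python
from urllib import request, error

def camera_requires_auth(url, timeout=5):
    """Return True if the camera's web interface demands credentials.

    A minimal sketch, not a product check: a reasonably secured device
    should reject an unauthenticated request with 401/403; success means
    the admin page or stream is open to anyone who finds it.
    """
    try:
        request.urlopen(url, timeout=timeout)
    except error.HTTPError as e:
        # 401 Unauthorized / 403 Forbidden: the device is asking for creds
        return e.code in (401, 403)
    # An unreachable host raises URLError and propagates -- we can't
    # conclude anything either way in that case.
    return False  # the request succeeded without any credentials
```

This only covers the HTTP side; it says nothing about UPnP exposure or default passwords, which need their own checks.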
Some of the devices looked at in the research include the following:
- AXIS net cameras
- Cisco Linksys webcam
- IP Camera Logo Server
- IP WebCam
- IQ Invision web camera
- Mega-Pixel IP Camera
- WebCamXP 5
There’s an astonishing amount of personally identifiable information (PII) up for grabs, then, and in many ways and formats. Screenshots, audio, moving images, things consumers shouldn’t be viewing deep in the heart of a business, things you shouldn’t have access to in a home environment—it’s all there.
This certainly isn’t “just” a webcam hacking problem. Harassing toddlers via baby monitors? Sure, those stories come around regularly. Home hubs not locked down as well as they could be? The frankly bizarre sky’s the limit.Webcam security tips
As with most Internet-connected devices, good security practices will help steer you clear of this danger. Keep your system up to date, along with your chosen selection of security tools, and perform regular scans to keep everything in ship shape condition.
If your cam is a USB connected to a desktop, you can always unplug when not in use.
If the cam is integrated into your laptop, you can turn it off completely via Device Manager.
You should also consider adding a webcam cover to your device if it doesn’t have one already fitted. If you need to cover a cam in a hurry, pretty much anything sticky will do the job. Masking tape is absolutely your friend.
If you’re worried about your conversations being recorded, you can also kill off the microphone should you so desire.
Most webcams should fire up a visible light to let you know when they’re in use. Some devices don’t do this, and so Windows 10 has the option to notify you when something is making use of it.
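Recorded footage also has to live somewhere on disk, so a quick sweep for video files or unusually large files is a reasonable first check. The sketch below is a rough illustration rather than forensic tooling; the extension list and the 50 MB threshold are assumptions you would tune for your own machine.

```python
import os

# Extensions a recording tool might plausibly write to disk. Both this
# list and the size threshold are illustrative assumptions.
VIDEO_EXTS = {".mp4", ".avi", ".mkv", ".mov", ".wmv", ".webm"}
SIZE_THRESHOLD = 50 * 1024 * 1024  # flag anything over 50 MB

def find_suspect_files(root):
    """Yield (path, size) for video files, or any file over the threshold."""
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            ext = os.path.splitext(name)[1].lower()
            if ext in VIDEO_EXTS or size >= SIZE_THRESHOLD:
                yield path, size
```

Pointing it at your home directory and sorting the results by size is usually enough to spot anything a recorder has left behind.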
If you think files are being recorded, they could well be stored on your machine somewhere. It’d be well worth having a look around some common (and not so common) file locations. There are also plenty of programs out there designed to see what’s eating up space on your hard drive, so you could use one of those to look for common video files or other large files.

Cheap and nasty?
Standalone cams are notorious for not being secured properly. If you have a cheap IoT device in your home watching over your sleeping toddler, or a few handy cams serving as convenient CCTV when you head off to the shops, take heed. It may be that the price for accessing said device on your mobile or tablet is a total lack of security.
Always read the manual and see what type of security the device is shipping with. It may well be that it has passwords and lockdown features galore, but they’re all switched off by default. If the brand is obscure, you’ll still almost certainly find someone, somewhere has already asked for help about it online.

Tuning in to chaos
While this isn’t anything particularly new where webcams and devices in the home are concerned, it’s a timely reminder to be careful about what we invite into our homes. Even the best devices can run into an exploit, and it’s a fact that many webcam devices don’t come anywhere close to being “the best.” Indeed, security researchers run into devices thrown together as cheaply as possible with no thought given to security all the time.
Until security is baked right into these useful yet potentially dangerous tools, and marketing teams realise it’s okay to allow a little drag on the initial user experience to ensure everything is locked down, this will continue to happen.
If you’re unsure about a particular brand, it won’t hurt to have a little dig around online first before purchasing. Pay close attention to security features listed or (more problematically) no security features listed whatsoever. If the device looks appealing and on sale at a surprisingly cheap price, a lack of any brand name listed whatsoever may be the point where alarm bells start going off.
You simply can’t be sure what you’re taking home at that point, and even the various security tips up above may not be enough to keep things safe and clean at all times. Be on your guard, drop some tape on that ever-present eye in the corner of your room, and go about your day. It’s definitely a problem, but it isn’t one you need to let rule your day-to-day online experience.
The post 15,000 webcams vulnerable to attack: how to protect against webcam hacking appeared first on Malwarebytes Labs.
Exactly one week ago, Emotet, one of the most dangerous threats to organizations in the last year, resumed its malicious spam campaigns after several months of inactivity. Based on our telemetry, we can see that the botnet started becoming chatty with its command and control servers (C2), about a week or so before the spam came through.

Figure 1: Communications with Emotet C2s over 90 days
To kick off its spam campaign last week, Emotet resumed spear phishing tactics it adopted in late spring 2019, hijacking old email threads with personalized subject lines and appearing as old invoices.
This week, Emotet is trying a different tactic, incorporating the news about NSA whistleblower Edward Snowden’s new book Permanent Record as a lure. The memoir, which is already on Amazon’s bestseller list, has been the subject of intense debates. In addition, the US government is also suing Snowden for violating non-disclosure agreements and publishing without prior approval.
Criminals are known to capitalize on newsworthy events for scams and other social engineering purposes. In this particular case, Emotet authors are supposedly offering Snowden’s memoir as a Word attachment. We collected emails from our spam honeypot in English, Italian, Spanish, German and French claiming to contain a copy of Snowden’s book in Word form.
Upon opening the document, victims see a fake message claiming that “Word hasn’t been activated” and are prompted to enable the content via a yellow security warning. Once they do, nothing appears to happen. However, what users don’t see is the malicious macro code that executes once they click the button.

Figure 3: Fake document containing macro code
The macro triggers a PowerShell command that will retrieve the Emotet malware binary from a compromised WordPress site. After infection, the machine will attempt to reach out to one of Emotet’s many C2s:

Figure 4: Network traffic upon infection
As each new week rolls in, the threat actors behind Emotet are always punctual with delivering their spam messages, thanks to their large botnet. And once they’ve spammed and infiltrated an endpoint, their work is far from over. As we’ve said before, Emotet is a double or even triple threat if it is not quarantined right away.
Malwarebytes business users and Premium home users are already protected against this threat.

Indicators of Compromise (IOCs)
Malicious Word document
Network traffic

Emotet: www.cia.com[.]py/wp-content/uploads/2019/09/XNFerERN/
Emotet C2: 184.108.40.206:7080/chunk/window/ringin/
Emotet C2: 133.130.73[.]156
Emotet C2: 178.32.255[.]133
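For defenders who want to sweep proxy or DNS logs against the indicators above, a small helper to “refang” defanged indicators and search log lines might look like the sketch below. The indicators come from this post; the log format and function names are illustrative assumptions.

```python
# IOCs from this post, in their defanged form. The already-refanged
# C2 URL above is omitted here for brevity.
IOCS = [
    "www.cia.com[.]py",
    "133.130.73[.]156",
    "178.32.255[.]133",
]

def refang(indicator):
    """Turn a defanged indicator (e.g. 133.130.73[.]156) back into its real form."""
    return indicator.replace("[.]", ".").replace("hxxp", "http")

def scan_log(lines, iocs=IOCS):
    """Return (line_number, indicator) pairs for every IOC hit in the log."""
    refanged = [refang(i) for i in iocs]
    hits = []
    for number, line in enumerate(lines, start=1):
        for ioc in refanged:
            if ioc in line:
                hits.append((number, ioc))
    return hits
```

Substring matching is deliberately crude; production tooling would normalize hostnames and IPs from parsed log fields instead.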
The post Emotet malspam campaign uses Snowden’s new book as lure appeared first on Malwarebytes Labs.
Last week on Labs, we sounded the alarm about the relaunch of Emotet, one of the year’s most dangerous forms of malware, with a new spam campaign. We also reported on how international students in the UK are targeted by visa scammers, what CEOs think about a potential US data privacy law, and introduced Malwarebytes Browser Guard. Finally, we looked at the role of data destruction in cybersecurity.

Other cybersecurity news
- A streak of hacked celebrity Instagram accounts continues as a group of cybercriminals target them to promote scam sites to their huge number of followers. (Source: BleepingComputer)
- YouTube’s new verification system attempted to ensure that large channels were safe from impersonation but had the unintended consequence of removing other popular channels. (Source: TechSpot)
- Ecuador has begun an investigation into a sprawling data breach in which the personal data of up to 20 million people was made available online. (Source: The New York Times)
- Google has released an urgent software update for its Chrome web browser and is urging Windows, Mac, and Linux users to upgrade the application to the latest available version immediately. (Source: The Hacker News)
- Researchers have uncovered two variants of information-stealing Mac malware that impersonate a legitimate stock and cryptocurrency trading application. (Source: SCMagazine)
- A Bulgarian phishing criminal who created fake versions of legitimate companies’ websites as part of a £40m fraud has been jailed. (Source: The Register)
- Nearly 50 school districts and colleges have been hit by ransomware in 2019 so far, and more than 500 individual K–12 schools have potentially been compromised. (Source: Dark Reading)
- Modern TV has a modern problem: all of your Internet-connected streaming devices are watching you back and feeding your data to advertisers. (Source: ArsTechnica)
- US authorities have indicted two suspects for hacking cryptocurrency exchange EtherDelta in December 2017, changing the site’s DNS settings, and redirecting traffic to a clone. (Source: ZDNet)
- How data breaches forced Amazon to update S3 bucket security. (Source: HelpNet Security)
When organization leaders think about cybersecurity, it’s usually about which tools and practices they need to add to their stack—email protection, firewalls, network and endpoint security, employee awareness training, AI and machine-learning technology—you get the idea. What’s not often considered is which items should be taken away.
Nearly as important to an organization’s security posture is data destruction, or what to do with data when it’s no longer necessary for the company…or when it falls into the wrong hands.

What exactly is data destruction?
The word “destruction” doesn’t always carry positive connotations. A person might worry about data destruction if their device fails and they haven’t made proper backups or don’t store their data in the cloud. However, organizations must destroy data on a nearly daily basis, whether that’s deleting emails to clean out an inbox or making room on a database by dumping old, no-longer-relevant files.
In days of yore, destroying data was a fairly simple task. Take old papers and run them through the shredder. Then dump at a recycling facility, wipe your hands, and smile at your empty file cabinets.
Modern data destruction is more complex. Data stored on tapes, disks, hard drives, USBs, and other physical hardware must be purged before old devices are thrown away, re-used, or sold. And data no longer in use that’s stored on networks and in the cloud should be systematically destroyed in the interest of organizing relevant data and keeping it out of the hands of criminals.
It’s a step companies must take whenever they stop using something that holds information. A thorough data destruction process involves making what was formerly on an electronic storage device unreadable. Businesses must do this, no matter if they intend to sell an old storage medium or throw it away.What are the main types of data destruction?
To truly destroy data, merely deleting a file is insufficient. While the file may not be viewable in a particular folder, it is still likely stored in the device’s hard drive or memory chip. Therefore, organizations must take an extra step to ensure the data can no longer be read by an operating system or application.
Companies have a few main choices when deciding how to destroy their data properly:

- Degaussing the storage medium
- Overwriting the old data
- Physically destroying the storage medium
Degaussing requires using a special tool called a degausser and choosing one designed for the particular storage device. The degausser removes or reduces the magnetic field associated with the storage disk, which renders the data inside unreadable and unrecoverable.
Overwriting means replacing the old data with new. This method only works when the storage medium is undamaged and writable—and of course when an organization plans to continue using the medium instead of throwing it away or reselling.
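As a concrete illustration of the overwriting method, here is a minimal Python sketch that overwrites a single file in place before deleting it. The function name and pass count are illustrative assumptions, and there is a real caveat: on SSDs and on journaling or copy-on-write filesystems, old blocks can survive an in-place overwrite, so genuine sanitization should follow dedicated guidance (such as NIST SP 800-88) rather than a script like this.

```python
import os

def overwrite_and_delete(path, passes=3):
    """Overwrite a file's contents with random bytes, then unlink it.

    Sketch of the 'overwriting' approach only; not a guarantee of
    unrecoverability on SSDs or copy-on-write filesystems.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite down to the device
    os.remove(path)
```

Whole-disk overwriting tools work on the same principle but operate below the filesystem, which avoids most of the caveats above.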
Physically destroying the storage hardware usually means striking it with a hammer or taking it into a field with a baseball bat, Office-Space style. This is a costly data destruction method, but one that gives exceptionally high confidence that someone could not access the information later.
There are also further destruction options within those broader categories. For example, data wiping is a form of overwriting, and erasure is another.

Which cybersecurity risks does data destruction tackle?
A breach is the cybersecurity threat most people probably think of when they ponder what could happen due to insufficient data destruction. Most organizations collect and store sensitive or personally identifying information on their employees and customers, for example. Yet, once those employees or customers move on, businesses may hold onto their data for a little while but eventually want to remove it from their systems so they are not liable for fallout from a breach.
Cybercriminals look to compromise organizations for this very reason; and they do not limit their efforts to data being actively used by an organization. Data at rest, in storage, and in transit are all at risk. And threat actors know that users and organizations often rid themselves of physical devices without completely wiping them of data. According to the BBC, 1 in 10 second-hand hard drives still contain users’ old information.
Obtaining the data may also happen innocently. An individual could buy a USB drive from a third-party source and notice there’s still information on it when they plug the device into a computer, for example. A person could also gain access to sensitive data by noticing that a company is throwing away some hard drives in an easily accessible dumpster, and take the disks out of the receptacle later.
Outside of data breaches, organizations may be fined for mishandling the information in their care. Businesses can incur millions of dollars in penalties once regulators conclude they’re not meeting minimum standards for data safekeeping.
An IT company called Probrand conducted a data destruction poll a couple of months after the General Data Protection Regulation (GDPR) came into effect. It showed that 71 percent of United Kingdom trade sector businesses did not have an official protocol for getting rid of old computer equipment. Then, 47 percent of respondents admitted they would not know which person in their organization to approach about data destruction.
Companies cannot view data destruction and cybersecurity separately. They go together, and if an organization doesn’t take it seriously, its cybersecurity plan falls short, particularly when it comes to safeguarding information. Enterprises should consider a top-down approach when protecting and disposing of data—especially when the GDPR or other regulations apply to them.

What should organizations consider when choosing a destruction method?
Although the data destruction techniques mentioned above encompass the main options available to organizations, that doesn’t mean companies do or should choose only one option and use it for all cases. Instead, they need to think about time, cost, and the validation and certification associated with each method.
Time comes into play because some techniques take longer than others to ensure old information is completely gone. The number of devices or drives an organization wants or needs to destroy at once also matters. For example, if a company only needs to delete the data from one or two endpoints, that’ll be a much shorter demand on time compared to dealing with hundreds of machines.
Cost is mainly a factor to keep in mind if an enterprise intends to use the hard drives again for different purposes, or it has limited financial resources. Perhaps their budgets do not allow for getting replacement computers, making physically destroying a hard drive out of the question.
Validation and certification are related. Both concern how companies may need to work with data destruction service providers that can validate their methods and issue certifications after doing the job. Having a certificate helps a business demonstrate its compliance.
For advice on which methods to use in which scenarios, the National Institute of Standards and Technology (NIST) has published guidelines for media sanitization. Organizations are not legally required to follow the standards put forth by this US Department of Commerce–sponsored report, but they are helpful in outlining best practices for protecting data from infiltration, abuse, misuse, theft, and resale.

Should destroying data be a high priority?
IT executives have a growing number of challenges to overcome regarding cybersecurity. Some of them may wonder if data destruction (or lack thereof) is a genuinely confirmed risk or merely a theoretical one. Substantial evidence shows that companies cannot afford to overlook data destruction as they iron out their cybersecurity plans.
Matt Malone is a dumpster diver who confirms that many hacks and identity thefts begin with someone going through the trash. Malone often targets the dumpsters of retailers and has said that his off-hours diving made him more money than his day job.
Also, a tech company called Stellar performed a residual data study in 2019 that analyzed the information left on 311 devices. It found that more than 71 percent of them contained personally identifiable information (PII). Additionally, 222 of the devices went to the secondary market without their original owners conducting the appropriate information-erasing procedures first.
An earlier study from the National Association for Information Destruction revealed that 40 percent of devices received secondhand had PII on them. Researchers looked at more than 250 items for the study.
Furthermore, research published in 2015 highlighted the need to work with reputable data destruction companies that stand behind their results. The study examined 122 used devices bought from e-commerce sites. In addition to 48 percent of the hard drives containing residual data, 35 percent of the mobile phones had information such as call and text logs, images, and videos.
Even worse, most of the devices showed signs of previous deletion attempts: 75 percent of the hard drives and 57 percent of the mobile phones. A closer look told the researchers that people had tried to delete the information with widely available but unreliable data destruction methods. A lesson learned here is that it's crucial to weigh the pros and cons of each option before tasking a reliable company with discarding the information.

Data destruction should not be overlooked
Cybersecurity is a hot topic for organizations, which are increasingly being targeted by cybercriminals for their troves of valuable PII. Data that is no longer useful to an organization is still a goldmine for threat actors. As the saying goes: One person’s trash is another person’s treasure.
And while organizations might spend a fortune on keeping their active data out of the wrong hands, what's often overlooked is whether inactive or old data is properly secured or destroyed. Removing all traces of old data is important for saving consumers from continued exploitation, plus it sends a message to criminals that your organization has airtight defenses—even around its dumpsters.
The post What role does data destruction play in cybersecurity? appeared first on Malwarebytes Labs.