A type confusion bug in Microsoft Edge and Internet Explorer remains unpatched as Microsoft doesn’t consider it a security vulnerability, Cybellum reveals.
The issue was reported to Microsoft on August 21, 2017. The researchers say that while Microsoft has confirmed the vulnerability, it decided against releasing a patch for it, because of the special conditions required to reproduce it. Specifically, it requires developer tools to be opened.
Affecting the latest versions of x86 Edge and x86/x64 Internet Explorer, the vulnerability occurs in the layout rendering engine (EdgeHTML & MSHTML), and the security researchers claim that, with some additional work, it would be possible to reproduce the crash without the developer tools.
“The type confusion occurs in the function window.requestAnimationFrame, which expects to receive a single function pointer parameter. The vulnerability occurs because the function doesn’t properly validate the parameter, and may be called with a value that is not a function pointer (an integer value),” the researchers say.
Being treated as a pointer, the supplied integer goes through a series of dereferences and a compare function and, if all the requisites are satisfied, the function performs one more dereference and returns that value to the caller. The function pointer is checked against CFG protection and, if the test succeeds, the function pointer is called, thus providing the attacker with full control over EIP, Cybellum says.
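The missing check is easy to picture. A safe implementation of an API like window.requestAnimationFrame validates that its argument is actually callable before treating it as one. The sketch below is an illustrative Python analogue of that validated path, not EdgeHTML code; the timestamp value is a made-up example.

```python
def request_animation_frame(callback):
    """Illustrative analogue of a safely implemented callback API.

    The Edge/IE bug arises because the native implementation skips this
    kind of type check and treats whatever value it receives as a
    function pointer, so an attacker-supplied integer gets dereferenced.
    """
    if not callable(callback):
        raise TypeError(
            "expected a function, got %s" % type(callback).__name__)
    # A real scheduler would queue the callback for the next frame;
    # here we just invoke it to show the validated path.
    return callback(16.7)  # hypothetical frame timestamp
```

With this check in place, a call like `request_animation_frame(0x41414141)` raises a TypeError instead of sending an attacker-controlled integer down the pointer-dereference path described above.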
The researchers argue that the only prerequisite required for the vulnerability to be triggered is to make the function AreAnyListenersFastCheck@CDebugCallbackNotificationHandlers return “true”, mainly because the actual vulnerability happens in the function BeforeInvokeCallbackDebugHelper@CAnimationFrameManager.
According to them, the easiest way to trigger the vulnerability is to open the developer tools, but that exploitation doesn’t have the same requirement. Moreover, the researchers claim that the bug was discovered with developer tools turned off, and that viewing the page source can also trigger it.
To exploit the bug, an attacker would need to successfully control a series of dereferences, bypass or satisfy the CFG protection check, gain full control over EIP, and continue with standard exploitation until code execution has been achieved.
“Practical exploitation of this vulnerability isn’t trivial, and might require combining it with another vulnerability (e.g. an info leak). That said, this is a great starting point for a multi-browser (Edge/IE) remote code execution exploit,” the researchers say.
The security researchers note that they reported the bug to Microsoft on August 21, but that the tech giant informed them on August 30 that, “because of the hard requirement to open developer tools in order to manifest the issue, this submission doesn’t meet the bar for servicing via security update, and will not be assigned a CVE.”
However, the company suggested that typical user behavior won’t involve opening the developer console while browsing. “While we understand that users can be tricked into opening this, we don’t believe that this will be a common scenario for a typical user,” Microsoft reportedly said.
Nonetheless, the tech company agrees that opening the Developer Console alters the application and puts it in a less secure state, and that the issue needs to be addressed. Moreover, Microsoft also told Cybellum that a future version of Internet Explorer will resolve the issue.

Infosec Island
No matter how much security technology we purchase, we still face a fundamental security problem: people. This is a realization that we’ve been grappling with as an industry for quite a while. As a security practitioner and during my time as a research analyst and industry adviser at Gartner, I spent countless hours evaluating security technologies and helping organizations decide which technologies and products would best enable them to secure data. But one malicious or negligent human can often intentionally or unintentionally nullify the effectiveness of technology-based controls. The truth is that humans are both our biggest threat and our last line of defense.
To make humans an effective last line of defense, we need to first grapple with two disturbing truths: 1) All humans are master deceivers; and, 2) we are all easily deceived.
Let me explain… Each of us is trained in the ways of deception beginning early in childhood. Early on, we were taught that lies make life easier and social situations more comfortable. Do you remember going to family reunions and being told to just act like you enjoy being there? Shake crazy uncle Bob’s hand even though he freaked you out? Eat your peas without complaining? And as we get older, we refine the talent even more – from learning how to expertly navigate questions like, “do these jeans make my butt look fat?” to when your significant other asks if you like their new hairstyle, to when your boss asks for your ‘honest opinion’ about his or her new strategy.
And those are just lies from the ‘little white lie’ category; we haven’t even started to get into the whoppers that we and others tell to hide things, get away with things, trick people, cheat, mislead, and outright steal from each other. And yet, we all know people who have believed both benign and malicious lies. And – if we are truthful with ourselves – we’ll even admit that each one of us has been deceived badly more than a few times over the course of our lives.
If I were to give the main reason that we fall for scams, social engineering and the like, it is because our brains are easily fooled. Our brain’s job is to filter and present reality. Each of our brains takes in a massive amount of input and then decides what is important, what the implications of that input are, and what (if any) response is needed. And our brains do that very efficiently by employing several shortcuts. Over the millennia, magicians, pickpockets, con artists, scammers, and others have learned how to hijack these mental shortcuts and use them to their advantage.
In my keynotes, I love using examples from magic, pickpocketing, and hypnosis to quickly and easily demonstrate how our brains can be manipulated. In this article, we don’t have the benefit of many of the visual aspects of what I’d usually demonstrate, however I’ll do my best to provide some of the high-level theory and principles.
Principle 1: Misdirection and attention
Our brains are programmed to constantly scan and determine what to ‘lock on’ to. This is referred to by brain scientists as our “spotlight of attention.” Magicians and pickpockets are masters at exploiting vulnerabilities in our attentional spotlight. They will draw your attention to one object or area while doing the ‘dirty work’ at the periphery or completely outside of the attentional spotlight. They frequently use a large visible action to cover for a smaller action.
We think that we are masters of our attention, but it is extremely easy for our attention to be hijacked. Unfortunately, it isn’t just illusionists that know and take advantage of this; criminals and scam artists do as well. The world is still recovering from NotPetya. This malware was originally widely believed to be what it appeared to be – ransomware. However, it was even more malicious. It was a wiper disguised as ransomware and very likely initiated as a state sponsored cyberattack.
Another example of misdirection in the cybersecurity world is when attackers launch a DDoS attack against a financial services company to cause diversions from the account takeover attacks. The end user and the bank see the extremely visible effects of the DDoS attack, and the account takeover and fraud activities are obfuscated for a time.
Principle 2: Influence and rapport
Another principle that comes into play when hijacking our brains is that of influence and rapport. Hypnotists, magicians, pickpockets, as well as criminals and con-artists are all masters at pulling the levers of influence and building rapport. Street and stage magicians, hypnotists, and pickpockets work to ensure that their participants quickly form a level of trust. This allows them to gain complicity as the performer shows them where to stand, what to do, and so on.
Robert Cialdini, Regents' Professor Emeritus of Psychology and Marketing at Arizona State University, wrote Influence: The Psychology of Persuasion, what is most often referred to as the definitive book on how influence works. Cialdini's theory of influence is based on six key principles: reciprocity, commitment and consistency, social proof, authority, liking, scarcity. He also recently added a seventh principle: the unity principle. The principle is about shared identities; what Seth Godin would refer to as Tribes. The more we identify ourselves with others, the more we are influenced by them.
Rather than describing each of the influence factors here, I encourage you to review Cialdini’s work, or one of the many derivative works based on his research. Needless to say, however, scam artists and phishers around the world leverage many of these tactics as they bait their hooks! The influence tactics are also additive, meaning that a savvy phisher will employ multiple influence tactics within a single message to make the lure attractive. For instance, if a phisher creates a message using scarcity/urgency, authority, social proof, and reciprocity all in one phishing email, they bring more firepower to their message than a simple message that uses only one of the tactics, or none at all.
Principle 3: Framing and context
Framing is of critical importance for performers, politicians, and marketers… as well as social engineers and con artists. The concept of framing is derived from the social sciences. It is basically the context, worldview, or lens through which a person views reality (or a specific situation). Framing can also be a social engineer or attacker’s way to hide in plain sight (costuming, persona development, and playing to the situation).
An example of framing that I use in presentations is where a specific effect can be presented in multiple ways depending on the frame that I’m trying to play to. For instance, if I have a sealed envelope that contains a written record of a participant’s upcoming choice, I can reveal that as either a prediction (if I want to play the part of a psychic) or as an example of how I can influence the participant to think or choose something (playing the part of a mentalist or Svengali-like hypnotist).
Simply stated – a frame gives us the context to interpret or understand the information we are presented or the situation in which we find ourselves. In fact, there are political, religious, and marketing organizations all dedicated to understanding the frames that people have and how to work within or to expand those frames so that people are open to new or different/challenging ideas. Frames are an extremely powerful force – and they are not always fact-based. When frames and facts collide, the facts are pushed aside and the frame is embraced tightly. FrameWorks President Susan Bales is known to often say, “When the facts don’t fit the frame, the facts get rejected, not the frame.”
Since everything operates within a frame, scammers, phishers, con-artists, and other unsavory types learn how to play to the frame. They will impersonate respected authority figures – such as in Business Email Compromise attacks. Framing also takes place in the way that language is used, the choice of medium for an attack, and more. For a great breakdown of framing in the context of social engineering, I encourage you to read the ‘Framing’ entry in the ‘Influencing Others’ section of The Social Engineering Framework at Social-Engineer.org.
Understanding how our brains can be used against us is a critical first step in learning how to combat the attacks of savvy attackers. The immediate take-away is that we need to give ourselves permission to slow down and think before acting. Doing so takes us out of situations where we are just acting in a reflexive/automatic manner and allows us to process things a bit more logically. Then we can mentally rewind the actions and potential motivations behind what people are saying, the emails that we are receiving, and situations that we are in to see if someone might have just tried to hijack our brain.
About the author: Perry Carpenter is Chief Evangelist and Strategy Officer at KnowBe4. Previously, Perry led security awareness, security culture management, and anti-phishing behavior management research at Gartner Research, in addition to covering areas of IAM strategy, CISO program management mentoring, and technology service provider success strategies.

Copyright 2010 Respective Author at Infosec Island
A recently observed phishing campaign was abusing compromised LinkedIn accounts to distribute phishing links via private messages and email, Malwarebytes warns.
The attack abuses existing LinkedIn accounts to distribute the phishing links to their contacts, but also to leverage the InMail feature to target external members. The campaign abuses long-standing and trusted accounts, including Premium membership accounts that can use the InMail feature to contact other LinkedIn users.
The fraudulent message claims to link to a shared document but instead redirects to a phishing site for Gmail and other email providers. To ensure that victims don’t immediately realize they’ve been scammed, a decoy document on wealth management from Wells Fargo is displayed after the user is asked to input their username, password, and phone number.
The phishing message Malwarebytes has encountered came from a trusted, compromised contact and contained a link to a so-called shared Google Doc. The Ow.ly URL shortener is used to hide the true URL used in the scheme, a method employed many times in previous phishing campaigns. Free hosting provider gdk.mx was also abused to redirect to a phishing page hosted on a hacked website.
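Redirect chains like this can be unwound before anyone clicks. The sketch below follows Location hops until a final destination is reached; the URLs are hypothetical, and the resolver is injected as a function so the example stays offline and testable rather than making real HTTP requests.

```python
def resolve_redirects(url, fetch_location, max_hops=10):
    """Follow HTTP redirects to reveal a shortened link's final destination.

    fetch_location(url) should return the Location header of a redirect
    response, or None when the URL is the final destination.
    """
    seen = set()
    for _ in range(max_hops):
        if url in seen:  # redirect loop detected
            break
        seen.add(url)
        next_url = fetch_location(url)
        if next_url is None:  # no further redirect: final destination
            return url
        url = next_url
    return url

# Simulated chain mirroring the one described above (hypothetical URLs):
chain = {
    "https://ow.ly/abc123": "https://gdk.mx/landing",
    "https://gdk.mx/landing": "https://hacked-site.example/shared-doc.html",
}
```

Calling `resolve_redirects("https://ow.ly/abc123", chain.get)` walks the shortener and the free-hosting hop and returns the hacked-site URL, exposing where the “shared document” really lives.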
The analyzed page was built as a Gmail phish, but also asks for Yahoo or AOL user names and passwords. It also asks users to input their phone number or a secondary email address before displaying the decoy Wells Fargo document to them.
Messages sent via InMail, which allow premium LinkedIn members to contact users who aren’t in their network, include a security footer message with the user’s name and professional headline, so that other members can distinguish authentic LinkedIn emails from phishing email messages. However, the platform also warns users that they can’t trust the content of these messages, even if they are sent via LinkedIn.
Malwarebytes also points out that the use of InMail, which requires a Premium account, comes at a hefty monthly cost. While spammers were seen upgrading free accounts only to send spam messages, the method couldn’t be used in large scale attacks, due to limited InMail credits.
“This limitation does not apply here though since the crooks are not creating (and paying for) their own accounts, but rather leveraging existing ones. Therefore, they have little to worry about burning free credits and tarnishing their victim’s reputation so long as it allows them to deliver their payload far and wide,” the security researchers note.
According to Malwarebytes, the number of compromised accounts isn’t known, and it is also unclear how the impacted LinkedIn accounts were compromised. Attackers might have abused the large-scale LinkedIn breach that was disclosed last year, but could also have gained access to the compromised accounts by using data from other major data breaches.
“It’s also unclear whether the shortened URLs are unique per hacked account or not, although we think they might be. The user whose account was hacked had over 500 connections on LinkedIn and based on Hootsuite‘s stats, we know 256 people clicked on the phishing link,” Malwarebytes says.
LinkedIn members who have been compromised should immediately review their account’s settings, change their password and enable two-step verification to prevent further compromise. They are also advised to warn their contacts of the compromise, as previous messages could be part of similar phishing attempts.
Several utility applications distributed through Google Play have been infected with the BankBot Android banking Trojan, TrendMicro reports.
Initially spotted in the beginning of 2017, when its source code leaked online, BankBot has been highly active throughout the year. Between April and July, the malware managed to slip into Google Play via infected entertainment applications or posing as banking software, and has recently switched to utility apps, it seems.
Designed to steal users’ online banking credentials via phishing pages, the malware can request admin privileges on the infected devices to perform its nefarious routine. In addition to stealing login credentials, it can also intercept and send SMS messages, retrieve contacts list, track the device, and make calls.
According to TrendMicro, the malware managed to infect four utility apps in Google Play, and might have impacted thousands of users. One BankBot application, the security researchers reveal, has been downloaded over 5000 times.
Like previous variants, the new Trojan iteration targets legitimate banking applications. However, the security researchers noticed that, while it still targets banks in 27 countries, the variant has added phishing pages for ten more United Arab Emirates (UAE) banking apps.
On the infected device, BankBot checks the installed software and, if it finds a targeted banking app, connects to the command and control (C&C) server and uploads the target’s package name and label. The server responds with a URL from which to download the library containing the files necessary for the overlay page displayed on top of the banking app.
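That check-and-report step can be pictured in a few lines. The sketch below mimics only the target-matching logic described above; the package names are hypothetical examples, not the malware’s real target list.

```python
TARGETED_BANKING_APPS = {  # hypothetical examples of targeted packages
    "com.examplebank.mobile": "Example Bank",
    "com.examplebank.uae": "Example UAE Bank",
}

def find_reportable_targets(installed_packages):
    """Mimic BankBot's first step: find targeted banking apps among the
    installed packages, yielding the (package name, label) pairs it would
    upload to the C&C server before fetching the matching overlay."""
    return [(pkg, TARGETED_BANKING_APPS[pkg])
            for pkg in installed_packages
            if pkg in TARGETED_BANKING_APPS]
```

If no targeted app is installed, the list comes back empty and the malware has nothing to report, which is why it stays dormant on uninteresting devices.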
The overlays have been designed so that the users believe they have accessed the legitimate pages. Thus, they input their login credentials without realizing the page is fake.
BankBot also packs a series of evasion techniques, and won’t work unless it runs on a real device and if the targeted banking app is installed. It also avoids devices located in the Commonwealth of Independent States (CIS) countries.
When it comes to UAE banking apps, the malware also performs an additional step, prompting users to enter their phone numbers. The server then sends a code to the victim via Firebase Message, and the victim is instructed to input bank details only after providing the PIN. However, even if the bank information is correct, BankBot shows an “error screen” and asks the user to input the credentials again.
“Apparently, the author of BankBot wants to verify the banking details of their victims. They ask for the details twice, just in case users input it incorrectly at first. BankBot will send the stolen data to the C&C server only after account information is entered twice,” TrendMicro says.
BankBot’s widened reach and the fact that it is experimenting with new techniques are concerning, TrendMicro points out, citing research claiming that mobile banking users in the Middle East and Africa are expected to exceed 80 million by 2017.
Experienced engineers know to “fail safe” the systems they design. This basic principle merely says that a system remains in a safe state in the event of any failure. For data security systems, this means that the sensitive data should remain inaccessible if anything goes wrong with the system. The simplest way of accomplishing this is data encryption.
Unfortunately, enterprises often overlook this critical principle when securing their data, even when the data is stored in the cloud or shared with third-party service providers. In these scenarios, the scope of potential failures increases as enterprises lose control and visibility over their data. Take Verizon as an example. It shared customer data with NICE Systems for the purpose of customer analytics, and that data included customer information in unprotected form. As a result, when NICE accidentally put the information in a misconfigured S3 bucket in the Amazon cloud, it was available for the whole world to see.
What happened to Verizon is just a trivial example of how security systems could fail with an honest mistake. Any CISO will tell you that their IT systems are constantly being attacked every day and that their employees are regularly receiving phishing emails. These events represent efforts by malicious parties to actively create failure in enterprise security systems, and the reality is that they only need to succeed once. The question, then, is what those hackers or rogue insiders see when they have circumvented the firewalls, evaded the monitoring, and bypassed access controls. Do they see valuable data ready for the taking, or are they confronted with an encrypted blob that encourages them to give up and seek other targets?
Given the fundamental role encryption can play in securing data, its adoption is still surprisingly low. Nearly every data breach disclosure has indicated that the data was not encrypted. Of course, this may be due to the fact that many regulations do not require a disclosure when the data is encrypted. Despite this safe harbor, however, there are still breaches disclosed weekly and hundreds of millions of records lost every year. Clearly, many are still not getting the message.
The reasons given for not using encryption are many: encryption is too complex, its overhead is too high, key management is tricky. Furthermore, for cloud and third-party use cases, traditional data-at-rest encryption appears geared more toward meeting bare-minimum compliance requirements than toward securing data. Fortunately, encryption solutions have made great strides. In addition to traditional storage and database encryption, application-level encryption options are more readily available and can protect data at a columnar granularity. Encrypting at the application level allows enterprises to maintain control and visibility of sensitive data values even if data is uploaded to the cloud or shared with third parties.
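As a sketch of what column-granular, application-level encryption means in practice, the example below encrypts a single sensitive column before rows ever leave the application. The cipher here (a SHA-256-based XOR keystream) is a toy for illustration only; a real deployment would use a vetted cryptographic library and proper key management, and the column name is a made-up example.

```python
import hashlib

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: hash key || nonce || counter. Illustration only;
    # do not use homemade constructions like this in production.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream is its own inverse, so this single
    # function both encrypts and decrypts.
    ks = _keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def encrypt_column(rows, column, key):
    """Encrypt one sensitive column; all other columns stay in the clear."""
    out = []
    for i, row in enumerate(rows):
        row = dict(row)
        nonce = i.to_bytes(8, "big")  # per-row nonce (toy scheme)
        row[column] = xor_cipher(key, nonce, row[column].encode()).hex()
        out.append(row)
    return out
```

If rows protected this way land in a misconfigured S3 bucket, the sensitive column is an opaque hex blob rather than readable data, which is exactly the fail-safe behavior described earlier.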
Modern application encryption solutions reduce the application development effort by taking care of key management, monitoring, and reporting. The best solutions eliminate the need for application code changes and make encryption an operational exercise rather than new development work. By making data security part of the operational process, enterprises can create a uniform, agile encryption strategy that can quickly adapt to new security and compliance requirements while focusing their application development teams on core business requirements.
Whether ephemerally or permanently, data will be shared with third parties and/or stored in the cloud. It behooves enterprises to protect that data with application-level encryption to ensure that there is a last line of defense against any failure in the data security system.
About the author: Min-Hank Ho has been developing enterprise security solutions for over 16 years and currently leads engineering for Baffle, Inc. Prior to joining Baffle, he led the development of Oracle Advanced Security and Oracle Key Vault, widely used data security products for enterprises with Oracle databases.
Small business owners all too frequently believe that they won’t be targeted by hackers because they don’t offer anything of interest to cybercriminals. Since mainstream media outlets tend to focus solely on the “spectacular” large corporate and government breaches, it’s somewhat understandable that this misconception continues to fester. But that narrative may be starting to shift – at least a bit.
The U.S. Securities & Exchange Commission recently stated that SMBs are “at even greater risk, and are far more vulnerable once they are victimized.” As the volume of attacks and lucrative profits continue to grow, all business owners – from Fortune 100 companies to small family-owned businesses – need to get serious about defending their business websites from being compromised.
A 2016 SEC report projects that “cybercrime will cost the world in excess of $6 trillion annually by 2021, up from $3 trillion in 2015.” Even with these skyrocketing figures affecting every sector of the global economy, mostly only large corporations have made significant progress toward mitigating the threat. Whether by refusing to admit that they will be targeted or by insisting that they already have sufficient protection, SMBs remain largely in denial about the clear fact that a business stays vulnerable as long as its website remains unprotected or unmonitored.
Small business owners often aren’t aware of the fluid and dynamic nature of vulnerability discovery and disclosure, and how it puts both updated and outdated website platforms at risk. According to a spokesperson for the Small Business Administration (SBA), companies that use web content management systems face even more acute threats, as “at any given time between 70 to 80-percent of users are running outdated versions of WordPress – leading to critical and well documented vulnerabilities.”
An owner of a typical small business site reviews web traffic figures daily, and they are often pleased to notice any increase in volume. However, analysis from multiple independent studies illustrates that an average of seven percent of daily traffic actually consists of hackers exploring and/or exploiting vulnerabilities. That figure is likely even higher for a “small fish” SMB that provides goods and services to a “big fish”– since these SMBs are often used as gateways into the more heavily defended large enterprises.
While DDoS attacks tend to receive some of the more frequent, large-scale press coverage, there are other website attacks that can wreak even more havoc on a small business. The nearly constant stream of application-layer bot attacks is much more common and harder to detect and defend against. “Bad” bots are masquerading as “good” bots such as Google and Bing crawlers – but are actually conducting competitive data mining, account hijacking, and much worse. They affect a business website’s availability, degrade the user experience, and vacuum up proprietary information all while under the radar – potentially eroding consumer trust in a brand.
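One widely documented defense against bots impersonating search-engine crawlers is a reverse-then-forward DNS check: resolve the visiting IP to a hostname, confirm the hostname belongs to the claimed crawler’s domain, then resolve that hostname back and confirm it returns the same IP. The sketch below takes the DNS lookups as injected functions so it runs offline; the sample IP and hostname follow Google’s documented pattern but are used here purely as an example.

```python
def is_verified_crawler(ip, reverse_dns, forward_dns,
                        allowed_suffixes=(".googlebot.com",
                                          ".search.msn.com")):
    """Return True only if ip reverse-resolves into an allowed crawler
    domain and that hostname forward-resolves back to the same ip."""
    host = reverse_dns(ip)
    if not host or not host.endswith(allowed_suffixes):
        return False  # spoofed User-Agent: hostname doesn't match
    return ip in forward_dns(host)  # forward-confirm to defeat fake PTRs
```

A bot that merely sets its User-Agent to “Googlebot” fails this check, because its IP does not reverse-resolve into googlebot.com, while genuine crawlers pass both legs of the lookup.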
Small businesses that are hacked often suffer losses of much greater magnitude than their larger counterparts because they lack the established “name recognition” of big companies. Hackers may use a compromised site to host malware or to get around blacklisted IP addresses, which can gravely affect a company’s marketing efforts by hurting its search engine rankings on Google, Bing and many others. If a company’s site is detected as compromised, search engines will devalue the domain until it is rid of malicious code.
Since mid-2010, attacks targeting small businesses have steadily increased to the point that they now account for about half of all attacks. Despite the high probability of facing a very real cyber-nightmare, the vast majority of small business owners have not made significant progress because they either lack the resources for sufficient defense or have not taken the threat seriously. According to the Small Business Administration cybersecurity portal, owners and staff with IT responsibilities must begin to think about how to respond to a sudden loss of control of, or access to, their website platforms. They should prioritize security assets “by conducting penetration tests and then shoring up defenses against the vulnerabilities that are discovered.”
SBA analysts recommend that owners utilize technology that is designed to solve the specific challenges that the business is facing in the cyber arena. “Small businesses should automate as much of their security as they possibly can. If after performing an inventory, customers employ data loss prevention technology to monitor if sensitive information is leaving the organization, they can automate scanning for these types of vulnerabilities,” the organization states.
Technology alone does not equal security, as owners and employees must begin to realize that their websites offer a potentially immense value proposition to hackers. An SMB is definitely not too small to care.
About the author: Avi Bartov is co-founder of GamaSec, a global provider of website security solutions for small and medium-sized businesses. A technology executive who led several companies to success in Europe and Israel, Avi has more than 20 years of experience in IT security management and is a graduate of Nanterre University with a degree in international law.
HBO, and many other companies, have been faced with that dreaded sick feeling of finding out that someone hacked their Twitter or Facebook accounts. In many cases, businesses are not treating their brand and goodwill the same way they are treating other corporate assets like their HR or finance systems.
Most businesses force password changes and two-factor authentication on users of internal systems. More forward-thinking companies have implemented privileged account management systems that allow users to check out passwords for high-value or high-risk systems and then randomize those passwords when they are checked back in. In some cases, a privileged account management system may even disable an account when it is not being used, making that account nearly hack-proof. However, companies seem slow to realize that their Twitter, Facebook or LinkedIn accounts and passwords require exactly the same protection as any of their high-risk or high-value internal systems. Why is that? Why aren’t companies at least turning on two-factor authentication, at a minimum?
The story in question is a great example of a well-known company having damage done to its brand by a group of hackers. Unlike with a financial or HR system, the loss of brand reputation is incalculable but acknowledged to be very high, notwithstanding the fact that a brand is damaged every time an article is written about what happened. (I love HBO and I will continue my subscription nevertheless!)
Is it too inconvenient to have to check out a password when you want to Tweet? Or update your company’s status on Facebook? Or use two-factor authentication? Do you have many social media employees who all need access to the same social media accounts at the same time, so you’re sharing a password among many, and two-factor authentication doesn’t work for shared accounts? Most modern privileged account management systems give you the capability of defining policies like “require check-out after hours”, “require check-out if outside the network”, or “wait for check-in before check-out” to ensure that only one person is posting at a time. It’s even possible to ensure that the social media employees never see the password that they are checking out! A combination of these types of policies could easily level up the protection of your social media (privileged) accounts. A really good system would also ensure that any passwords used by employees that aren’t randomized are checked against a list of known hacked passwords that are in the dictionaries of most hackers. Well-known examples of such hacked passwords include: starwars, 123456 and qwerty.
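The check-out flow described above can be sketched in a few dozen lines. The class below is an illustrative toy, not a product design: it enforces one-poster-at-a-time check-out, randomizes the password on check-in, and rejects passwords found in a (tiny, sample) breach dictionary.

```python
import secrets
import string

KNOWN_BREACHED = {"starwars", "123456", "qwerty"}  # tiny illustrative sample

class SocialAccountVault:
    """Minimal sketch of a privileged-account check-out flow for shared
    social media credentials (illustrative only)."""

    def __init__(self):
        self._passwords = {}       # account -> current password
        self._checked_out = set()  # accounts currently in use

    def enroll(self, account, password):
        if password.lower() in KNOWN_BREACHED:
            raise ValueError("password appears in known breach dictionaries")
        self._passwords[account] = password

    def check_out(self, account):
        # "wait for check-in before check-out": one poster at a time.
        if account in self._checked_out:
            raise RuntimeError("account already checked out")
        self._checked_out.add(account)
        return self._passwords[account]

    def check_in(self, account):
        # Randomize on check-in so the password just used is now useless.
        self._checked_out.discard(account)
        alphabet = string.ascii_letters + string.digits
        self._passwords[account] = "".join(
            secrets.choice(alphabet) for _ in range(24))
```

A real privileged account management product would add the time-of-day and network-location policies mentioned above, plus the option of typing the password into the platform on the employee’s behalf so they never see it at all.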
It’s really time to start protecting your Facebook, LinkedIn, Twitter, Tumblr, Instagram and all other social media systems with as much security as your accounts payable or human resources system. There are no technical excuses.
About the author: Jackson Shaw is senior director of product management at One Identity, an identity and access management company formerly under Dell. Jackson has been leading security, directory and identity initiatives for 25 years.
How Predictive Intelligence Helps to Protect Companies
The malware and IT security panorama has undergone a major change, and enterprise security will never be the same. Hackers have improved drastically in both volume and sophistication, and new techniques for penetrating defenses and hiding malware are allowing threats to remain on corporate networks far longer than ever before.
Protecting an enterprise is a challenge because an enterprise can have hundreds of thousands of computers on its network, and a criminal needs to compromise only one of them to succeed. Security companies have been trying to protect computers for decades, implementing smart tactics to (try to) ensure that not a single computer gets infected.
In the beginning it was easy: the number of threats was very low, so being able to identify all of them was enough, and computers were safe. Some of those threats were complex, like polymorphic viruses. Some, like the metamorphic ones, were a nightmare for antivirus companies, as it could take expert researchers days, even weeks, to create a detection for them. The creators of these viruses were people trying to show off their abilities, and that was it; there was no ulterior motive.
As the internet grew, a clear motive emerged: money. Once cyber-criminals figured out how to profit financially from these attacks, things really took off, and security companies, once again, had to adjust.
The reality today is that the number of new threats being created is growing exponentially. In the old days a virus could take weeks or months to travel from LA to NY; now, with the internet, a virus can go from Washington DC to Tokyo in a few seconds.
Blacklisting is one tactic traditional anti-virus companies have used to fight cyber-crime. It draws on decades of experience and includes accurate signature detections, capable of effectively detecting hundreds of millions of malware samples. On the negative side, blacklisting comes with a lot of uncertainty: its goal is to find malicious software, so anything considered non-malicious is allowed, even though neither the security vendor nor the customer has any idea what it is or what it is doing.
To make up for this uncertainty, we have also seen the tactic of whitelisting. Whitelisting can work well, as it allows only goodware to be executed on the system. However, just like blacklisting, this method has its pitfalls, including trojanized programs.
Both blacklisting and whitelisting worked well for a while, but in the age of advanced threats, they can no longer be counted on as the sole method. What happens when no malware is involved in an attack? Neither of these models works, because at the end of the day they are two sides of the same coin: eliminating malware. Cyber-criminals can try and fail a million times, but as soon as they get it right once, they win. It’s not a level playing field, and our solutions need to evolve to get ahead.
In the Verizon Data Breach Investigations Report published this year, Verizon covers security breaches and the different tactics used by the attackers. Malware was used in only 51% of the cases; nearly half did not include malware at all! This means that neither approach (blacklisting or whitelisting) on its own will protect your business.
So, where do we go from here? What does an advanced cyber-security solution need to look like for enterprises in 2018 and beyond?
Classification of all files, using:
- Blacklisting: As explained, these technologies have their limitations, but at the same time they are excellent at doing what they must do (detecting malware).
- Whitelisting: With it, we can eliminate the uncertainty caused by blacklisting.
- Automation: The only scalable and viable way to classify all files is through automation, providing the best possible accuracy.

Real-time monitoring: Goodware is already being used successfully by attackers, so we need to know everything that is happening on each computer in real time, including detecting malware-less attacks.

Forensics: It doesn’t matter how good we are; cyber-criminals will eventually compromise a computer, and we cannot always outsmart them. That’s why this last component is essential. As soon as a security breach is detected, forensics lets us answer all the questions that need to be answered: What happened? When did it happen? How did the attackers enter? What have they done?
Including all of these components is an uphill battle for security vendors, but as an industry, we know what it takes to combat the most sophisticated cyber-attacks in history. Now, it’s a matter of execution, and businesses recognizing how important security is to their objectives.
About the Author: @Luis_Corrons has been working in the security industry for more than 17 years, specifically in the antivirus field. He is the Technical Director at PandaLabs, the malware research lab at Panda Security. Luis is a WildList reporter, and a top-rated industry speaker at events like Virus Bulletin, HackInTheBox, APWG, Security BSides, etc. Luis also serves as liaison between Panda Security and law enforcement agencies, and has helped in a number of cyber-criminal investigations.