InfoSec Island

Privilege Escalation Flaws Impact Wacom Update Helper

InfoSec Island - Fri, 05/17/2019 - 10:57am

Talos’ security researchers have discovered two security flaws in the Wacom update helper that could be exploited to elevate privileges on a vulnerable system.

The update helper tool is installed alongside the macOS application for Wacom tablets. Designed for interaction with the tablet, the application can be managed by the user.

What the security researchers discovered is that an attacker with local access could exploit these vulnerabilities to elevate their privileges to root.

Tracked as CVE-2019-5012 and featuring a CVSS score of 7.8, the first bug was found in the startProcess command of the update helper service in Wacom driver version 6.3.32-3.

The command, Talos explains, takes a user-supplied script argument and executes it under root context. This could allow a user with local access to raise their privileges to root.
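
Talos has not published the helper’s source code, but the flaw it describes is a classic anti-pattern: a root-privileged service executing whatever path a caller hands it. Below is a minimal, hypothetical sketch of that pattern (the real helper is a native macOS service; every name here is illustrative), alongside the kind of check that would close the hole:

    # Hypothetical sketch of the anti-pattern Talos describes: a service
    # running as root that executes a caller-supplied script unchecked.
    # This is NOT Wacom's code; the interface and names are illustrative.
    import os
    import stat
    import subprocess

    def start_process(user_supplied_script: str) -> int:
        # VULNERABLE: no check of ownership, location or signature, so any
        # local user could pass /tmp/evil.sh and have it run as root.
        return subprocess.call(["/bin/sh", user_supplied_script])

    def start_process_safer(script: str) -> int:
        # A safer variant pins the script to a root-owned, non-world-writable
        # file inside a trusted directory before executing it.
        st = os.stat(script)
        if st.st_uid != 0 or st.st_mode & stat.S_IWOTH:
            raise PermissionError("script must be root-owned, not world-writable")
        if not os.path.realpath(script).startswith("/Library/Application Support/"):
            raise PermissionError("script outside trusted directory")
        return subprocess.call(["/bin/sh", script])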

The second security flaw is tracked as CVE-2019-5013 and features a CVSS score of 7.1. It was found in the Wacom update helper service in the start/stopLaunchDProcess command.

“The command takes a user-supplied string argument and executes launchctl under root context. A user with local access can use this vulnerability to load arbitrary launchD agents,” Talos reveals.

Attackers looking to target these vulnerabilities would need local access to a vulnerable machine for successful exploitation.

According to the security researchers, the Wacom driver for macOS, versions 6.3.32.2 and 6.3.32.3, is affected by these vulnerabilities.

Wacom has already released version 6.3.34, which addresses these bugs.

Related: Cisco Finds Serious Flaws in Sierra Wireless AirLink Devices

Related: Hard-Coded Credentials Found in Alpine Linux Docker Images

Related: Multiple Vulnerabilities Fixed in CUJO Smart Firewall


Answering Tough Questions About Network Metadata and Zeek

InfoSec Island - Wed, 05/08/2019 - 3:53pm

We often receive questions about our decision to anchor network visibility to network metadata, as well as how we choose and design the algorithmic models that further enrich it for data lakes and security information and event management (SIEM) systems.

The story of Goldilocks and the Three Bears offers a pretty good analogy as she stumbles across a cabin in the woods in search of creature comforts that strike her as being just right.

As security operations teams search for the best threat data to analyze in their data lakes, network metadata often lands in the category of being just right.

Here’s what I mean: NetFlow offers incomplete data and was originally conceived to manage network performance. PCAPs are performance-intensive and expensive to store in a way that ensures fidelity in post-forensics investigations. The tradeoffs between NetFlow and PCAPs leave security practitioners in an untenable state.

NetFlow: Too little

As the former Chief of Analysis for US-CERT has observed: “Many organizations feed a steady stream of Layer 3 or Layer 4 data to their security teams. But what does this data, with its limited context, really tell us about modern attacks? Unfortunately, not much.”

That’s NetFlow.

Originally designed for network performance management and repurposed for security, NetFlow fails when used in forensics scenarios. What’s missing are attributes like application and host context that are foundational to threat hunting and incident investigations.

What if you need to go deep into the connections themselves? How do you know if there are SMBv1 connection attempts, the main infection vector for WannaCry ransomware? You might know if a connection on Port 445 exists between hosts, but how do you see into the connection without protocol-level details?

You can’t. And that’s the problem with NetFlow.

PCAPs: Too much

Used in post-forensic investigations, PCAPs are handy for analyzing payloads and reconstructing files to determine the scale and scope of an attack and to identify malicious activity.

However, an analysis of full PCAPs in Security Intelligence explains how even the simplest networks would require hundreds of terabytes, if not petabytes, of storage for PCAPs.

Because of that – not to mention the exorbitant cost – organizations that rely on PCAPs rarely store more than a week’s worth of data, which is useless when you have a large data lake. A week’s worth of data is also insufficient when you consider that security operations teams often don’t know for weeks or months that they’ve been breached.

Add to that the huge performance degradation – I mean frustratingly slow – when conducting post-forensic investigations across large data sets. Why would anyone pay to store PCAPs in return for lackluster performance?

Network metadata: Just right

The collection and storage of network metadata strikes a balance that is just right for data lakes and SIEMs.

Zeek-formatted metadata gives you the proper balance between network telemetry and price/performance. You get rich, organized and easily searchable data with traffic attributes relevant to security detections and investigation use-cases (e.g. the connection ID attribute).

Metadata also enables security operations teams to craft queries that interrogate the data and lead to deeper investigations. From there, progressively targeted queries can be constructed as more and more attack context is extracted.
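
To make the SMBv1 example above concrete, here is a minimal sketch that scans a Zeek conn.log (in Zeek’s default tab-separated format) for connections to port 445, then pivots on the connection ID (uid) into smb_mapping.log for protocol-level detail. The file paths are assumptions; in practice the same query would run against the data lake or SIEM:

    # Minimal sketch: find port-445 connections in Zeek's conn.log, then
    # pivot on the uid into smb_mapping.log. Paths are illustrative.

    def read_zeek_tsv(path):
        """Yield each record of a Zeek TSV log as a dict keyed by #fields."""
        with open(path) as f:
            fields = None
            for line in f:
                if line.startswith("#fields"):
                    fields = line.rstrip("\n").split("\t")[1:]
                elif not line.startswith("#") and fields:
                    yield dict(zip(fields, line.rstrip("\n").split("\t")))

    smb_uids = set()
    for rec in read_zeek_tsv("conn.log"):
        if rec.get("id.resp_p") == "445":
            smb_uids.add(rec["uid"])
            print(rec["ts"], rec["uid"], rec["id.orig_h"], "->", rec["id.resp_h"])

    # The shared uid links conn.log to the protocol-level SMB logs, the
    # connection context that NetFlow cannot provide.
    for rec in read_zeek_tsv("smb_mapping.log"):
        if rec["uid"] in smb_uids:
            print("SMB detail:", rec.get("share_type"), rec.get("path"))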

And it does so without the performance and big-data limitations common with PCAPs. Network metadata reduces storage requirements by over 99% compared to PCAPs. And you can selectively store the right PCAPs, retrieving them only after metadata-based forensics have pinpointed the payload data that is relevant.

The perils of managing your own Bro/Zeek deployment

Another question customers often ask us is whether they should manage their own Bro/Zeek deployments. The answer is best explained through the experience of one of our government customers, which chose to deploy and manage it themselves.

At the time, the rationale was reasonable: Use in-house resources for a one-time, small-scale deployment, and incrementally maintain it with the rest of the infrastructure while providing significant value to their security team.

But over time, it became increasingly untenable:

  • It was difficult to keep it tuned. Each patch or newly released version required the administrator to recompile a binary and redeploy. 
  • It became difficult to scale. While partially an architectural decision, sensors can rarely scale by default – especially those that push much of the analytics and processing to the sensor. We don’t see many deployments that can even operate at 3 Gbps per sensor. Over time, the sensors began to drop packets, and the customer suddenly had to architect clusters to support the required processing.
  • It was painfully difficult to manage legions of distributed sensors across multiple geographic locations, especially when sensor configurations were heterogeneous. When administrators who were familiar with the system left, a critical part of their security infrastructure was left unmanaged.

This no-win tradeoff drives many customers to ask us how their security teams can better spend their time. Should they manually administer tools themselves (a.k.a. barely keeping afloat), or focus on being security experts and threat hunters?

In addition to the deployment challenges for those who opt for the self-managed approach, day-to-day operational requirements like system monitoring, system logging and even front-end authentication pose a heavy burden.

Most make the choice to find a partner that can simplify the complexity of such a deployment: Accelerate time to deployment, enable automatic updates that eliminate the need to regularly patch and maintain, and perform continuous system monitoring.

These are default capabilities that free you to focus on the original charter of your security team.

About the author: Kevin Sheu leads product marketing at Vectra. During the past 15 years, he has held executive leadership roles in product marketing and management consulting, where he has demonstrated a passion for product innovations and how they are adopted by customers. Kevin previously led growth initiatives at Okta, FireEye and Barracuda Networks.


Qakbot Trojan Updates Persistence, Evasion Mechanism

InfoSec Island - Mon, 05/06/2019 - 1:11pm

The Qakbot banking Trojan has updated its persistence mechanism in recent attacks and also received changes that potentially allow it to evade detection, Talos’ security researchers say. 

Also known as Qbot and Quakbot, the Trojan has been around for nearly a decade and has received a variety of changes over time to remain a persistent threat, although its functionality has remained largely unaltered.

Known for targeting businesses to steal login credentials and eventually drain their bank accounts, the malware has received updates to the scheduled task it uses to achieve persistence on infected systems, changes that also allow it to evade detection.

The Trojan typically uses a dropper to compromise a victim’s machine. During the infection process, a scheduled task is created on the victim machine to execute a JavaScript downloader that makes a request to one of several hijacked domains.

A spike in requests to these hijacked domains observed on April 2, 2019 (which follows DNS changes made to them on March 19) suggests that the threat actor has made updates to the persistence mechanism only recently, in preparation for a new campaign. 

The downloader requests the URI "/datacollectionservice[.]php3" from the hijacked domains, which are stored XOR-encrypted at the beginning of the JavaScript. The response is also obfuscated: the transmitted data is saved as (randalpha)_1.zzz and (randalpha)_2.zzz and decrypted using code contained in the JavaScript downloader.

At the same time, a scheduled task is created to execute a batch file. The code reassembles the Qakbot executable from the two .zzz files, using the type command, after which the two .zzz files are deleted. 
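
Conceptually, the batch file performs a plain concatenation, the Windows equivalent of "type part1 part2 > out". A rough Python rendering of that reassembly step (file names invented, not taken from a sample):

    # Sketch of the reassembly step Talos describes: concatenate the two
    # .zzz halves back into an executable, deleting the halves afterward.
    import os

    def reassemble(part1: str, part2: str, out: str) -> None:
        with open(out, "wb") as dst:
            for part in (part1, part2):
                with open(part, "rb") as src:
                    dst.write(src.read())
                os.remove(part)  # the batch file also deletes the halves

    # Demo with dummy halves: neither file is a valid executable on its
    # own, which is why detection focused on seeing the full binary in
    # transit can miss the transfer.
    with open("abc123_1.zzz", "wb") as f:
        f.write(b"MZ...first half...")
    with open("abc123_2.zzz", "wb") as f:
        f.write(b"...second half...")
    reassemble("abc123_1.zzz", "abc123_2.zzz", "reassembled.bin")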

The changes in the infection chain make it more difficult for traditional anti-virus software to detect attacks, and the malware may easily be downloaded onto the target machine, given that it is now obfuscated and saved in two separate files.

“Detection that is focused on seeing the full transfer of the malicious executable would likely miss this updated version of Qakbot. Because of this update to persistence mechanisms, the transfer of the malicious Qbot binary will be obfuscated to the point that some security products could miss it,” Talos concludes. 

Related: Qakbot, Emotet Increasingly Targeting Business Users: Microsoft

Related: Qbot Infects Thousands in New Campaign


Flaws in D-Link Cloud Camera Expose Video Streams

InfoSec Island - Mon, 05/06/2019 - 1:09pm

Vulnerabilities in the D-Link DCS-2132L cloud camera can be exploited by attackers to tap into video or audio streams, but could also potentially provide full access to the device. 

The main issue with the camera is the fact that no encryption is used when transmitting the video stream. Specifically, both the connection between the camera and the cloud and that between the cloud and the viewing application are unencrypted, thus potentially exposed to man-in-the-middle (MitM) attacks.

The viewer app and the camera communicate through a proxy server on port 2048, using a TCP tunnel based on a custom D-Link tunneling protocol, but only parts of the traffic are encrypted, ESET’s security researchers have discovered. 

In fact, sensitive details such as the requests for camera IP and MAC addresses, version information, video and audio streams, and extensive camera info are left exposed to attackers. The vulnerability resides in the request.c file, which handles HTTP requests to the camera. 

“All HTTP requests from 127.0.0.1 are elevated to the admin level, granting a potential attacker full access to the device,” ESET notes.
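
The vulnerable code lives in C (request.c), but the flaw reduces to a single misplaced trust decision. A hypothetical Python rendering of the check ESET describes:

    # Illustrative rendering of the request.c flaw: any request arriving
    # from 127.0.0.1 is granted admin rights with no credential check.
    # Not D-Link's actual code; structure and names are invented.
    from dataclasses import dataclass

    @dataclass
    class Request:
        remote_addr: str
        user: str = ""
        password: str = ""

    def check_credentials(req: Request) -> str:
        # Placeholder for the camera's normal authentication path.
        return "user" if req.user else "denied"

    def authorize(req: Request) -> str:
        # VULNERABLE: the viewer app's tunnel terminates on localhost, so
        # every tunneled request, including an attacker's, matches this
        # test and is silently elevated to admin.
        if req.remote_addr == "127.0.0.1":
            return "admin"
        return check_credentials(req)

    print(authorize(Request(remote_addr="127.0.0.1")))  # -> admin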

An attacker able to intercept the network traffic between the viewer app and the cloud or between the cloud and the camera can see the HTTP requests for the video and audio packets. This allows the attacker to reconstruct and replay the stream at any time, or obtain the current audio or video stream. 

ESET’s security researchers say they were able to obtain the streamed video content in two raw formats. 

Another major issue was found in the “mydlink services” web browser plug-in, which allows users to view video streams. The plug-in manages the creation of the TCP tunnel and the video playback, but is also responsible for forwarding requests for the video and audio data streams through a tunnel. 

The tunnel is available to the entire operating system, meaning that any application or user on the computer can access the camera’s web interface with a simple request (though only while live video is streaming).

“No authorization is needed since the HTTP requests to the camera’s webserver are automatically elevated to admin level when accessing it from a localhost IP (viewer app’s localhost is tunneled to camera localhost),” the researchers explain. 

While D-Link has addressed the issues with the plug-in, a series of vulnerabilities remain in the custom D-Link tunneling protocol that could allow an attacker to replace the legitimate firmware on the device with a maliciously modified one. To do so, they would need to replace the video stream GET request with a specific POST request that fetches a bogus firmware update.

The attack, ESET notes, is not trivial to perform and requires dividing the firmware file into blocks with specific headers and of a certain maximum length. However, because the authenticity of the firmware binary is not verified, an attacker could upload one containing cryptocurrency miners, backdoors, spying software, botnets or other Trojans, or they could deliberately “brick” the device.

Other issues the researchers discovered include the fact that the D-Link DCS-2132L can set up port forwarding to itself on a home router via the Universal Plug and Play (UPnP) protocol, thereby exposing its HTTP interface on port 80 to the Internet without the user even knowing about it. The issue can be mitigated by disabling UPnP.

“Why the camera uses such a hazardous setting is unclear. Currently close to 1,600 D-Link DCS-2132L cameras with exposed port 80 can be found via Shodan, most of them in the United States, Russia and Australia,” the researchers say. 

ESET says it reported the issues to D-Link in August 2018, including vulnerable unencrypted cloud communication, insufficient cloud message authentication and unencrypted LAN communication, but that only some of the flaws have been mitigated, such as the “mydlink services” plug-in, which is now properly secured. The most recent firmware available for the device is dated November 2016. 

“D-Link DCS-2132L camera is still available on the market. Current owners of the device are advised to check that port 80 isn’t exposed to the public internet and reconsider the use of remote access if the camera is monitoring highly sensitive areas of their household or company,” ESET concludes. 

Related: Critical Vulnerabilities Allow Takeover of D-Link Routers

Related: D-Link Patches Code Execution, XSS Flaws in Management Tool


SOAR: Doing More with Less

InfoSec Island - Fri, 04/26/2019 - 5:29am

The security orchestration, automation and response model has many benefits, including some that are unintended

Security teams in every industry and vertical are facing a common set of challenges. Namely, defending against an endless stream of cyberattacks, having too many security tools to manage, dealing with overwhelming workloads, and having a shortage of skilled security analysts. Most enterprises try to solve these challenges the old-fashioned way — by adding more tools and hoping they deliver on their promises.

Progressive enterprises are adopting a new approach, called Security Orchestration, Automation and Response (SOAR), which focuses on making existing technologies work together to align and automate processes. SOAR also frees security teams to focus on mitigating active threats instead of wasting time investigating false positives and performing routine tasks manually.

What is SOAR?

SOAR enables security operations centers (SOCs), computer security incident response teams (CSIRTs) and managed security service providers (MSSPs) to work faster and more efficiently.

Security Orchestration connects disparate security systems and complex workflows into a single entity, for enhanced visibility and automated response actions. Orchestration is accomplished by integrating security tools via their APIs to coordinate alert data streams into workflows.

Automation, meanwhile, executes multiple processes or workflows without the need for human intervention. It can drastically reduce the time it takes to execute operational workflows, and enables the creation of repeatable processes and tasks.

Instead of performing repetitive, low level manual actions, security analysts can concentrate on investigating verified threats that require human analysis.

Some SOAR approaches even use machine learning to recommend actions based on the responses used in previous incidents.
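
To make the orchestration/automation split concrete, here is a hypothetical sketch of a tiny playbook: enrichment via an orchestrated threat-intel lookup, automated triage of likely false positives, and an automated containment action. Every service, field and threshold is invented for illustration; real SOAR platforms express this as configurable workflows rather than hand-written code:

    # Hypothetical SOAR-style playbook: enrich, triage, respond.
    from dataclasses import dataclass

    @dataclass
    class Alert:
        src_ip: str
        signature: str
        severity: int  # 1 (low) to 10 (critical)

    def reputation_score(ip: str) -> int:
        # Placeholder for an orchestrated API call to a threat-intel feed.
        return 2 if ip.startswith("10.") else 8

    def block_ip(ip: str) -> None:
        # Placeholder for an orchestrated call to a firewall API.
        print(f"blocking {ip} at the perimeter firewall")

    def playbook(alert: Alert) -> str:
        score = reputation_score(alert.src_ip)        # enrichment
        if alert.severity <= 3 and score < 5:
            return "auto-closed as likely false positive"
        if score >= 8:
            block_ip(alert.src_ip)                    # automated response
            return "contained, escalated to an analyst"
        return "queued for human review"

    print(playbook(Alert("203.0.113.7", "suspicious beaconing", 6)))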

Three elements make up a successful SOAR implementation:

Collaboration - essential for creating efficient communication flows and knowledge transfer across security teams.

Incident Management - ideally, a single platform will process all inputs from security tools, providing decision-makers with full visibility into the incident management process.

Dashboards and Reporting - provide a comprehensive view of an enterprise’s security infrastructure as well as detailed information for any incident, event, or case.

Implementing SOAR

One of the primary benefits of SOAR is its flexibility. It can be used to unify operations across an enterprise’s entire security ecosystem, or as a vertical solution integrated within an existing product.

For example, one of the most popular product categories for this kind of vertical implementation is Security Information and Event Management (SIEM), primarily because SOAR within a SIEM has broad applicability across a wide range of processes. In contrast, when SOAR is implemented within other product areas, such as threat intelligence, it tends to have a more limited scope.

Initially, SOAR was designed for use by SOCs. However, as the approach matured and proved its benefits, other groups have adopted it, including managed security services providers (MSSPs) and computer security incident response teams (CSIRTs). More recently, financial fraud and physical security teams have also turned to SOAR.

Top Five SOAR Benefits

Arguably, the most powerful benefit of SOAR is its ability to integrate with just about any security process or tool already in use, and to enhance the performance and usefulness of each. Tight integration improves security teams’ efficiency in detecting and remediating threats and attacks, and provides a single ‘pane of glass’ into asset databases, helpdesk systems, configuration management systems, and other IT management tools.

SOAR arms security teams with the ability and intelligence to react faster and more decisively to a threat or attack by unifying information from multiple tools and creating a single version of the truth.

Security teams waste an inordinate amount of time and energy dealing with false positives, since there are so many of them generated each day. SOAR automates the triage and assessment of low-level alerts, freeing staff to focus their attention where it is really needed.

Security staff spend way too much time on menial tasks such as updating firewall rules, adding new users to the network, and removing those who have left the company. SOAR virtually eliminates such time-consuming, repetitive functions.

Although cutting costs is rarely a driving factor for adopting SOAR, it often delivers this additional benefit by improving efficiencies and staff productivity.

Unifying existing security tools and making them work together, rather than in silos, delivers greater visibility into threats. Implementing a SOAR model provides the glue that makes this security intelligence actionable, using repeatable processes for faster incident response without the need to add more resources.

About the Author: Michele Zambelli has more than 15 years of experience in security auditing, forensics investigations and incident response. He is CTO at DFLabs, where he is responsible for the long-term technology vision of its security orchestration, automation and response platform, managing R&D and coordinating worldwide teams.


Gaining Control of Security and Privacy to Protect IoT Data

InfoSec Island - Wed, 04/24/2019 - 6:50am

Internet traffic growth is unrelenting and will continue to expand exponentially, in large part due to the Internet of Things (IoT). The amount of data being generated is staggering, with 5 quintillion bytes of data produced and transmitted over the Internet daily.

Virtually every industry is going to be impacted by IoT. The vast amounts of data that devices, apps and sensors create, collect and consume are a real challenge for individuals and companies, throughout the world. This explosive growth of IoT-driven traffic is expanding the attack surface on our networks, putting business and user data at great risk.

Within our increasingly connected world, IoT gathers insightful data from almost every facet of our lives. These devices offer valuable intel about our social habits, public and personal transportation, healthcare facilities, manufacturing assembly processes, industrial plant operations, and even our personal health, sleep and exercise regimens.  

Can you imagine the consequences IoT device manufacturers and healthcare providers would face if sensitive patient health data was mishandled and exploited by hackers? Or if a design flaw in a modern car’s network access control system couldn’t be remotely patched, and hackers took over the vehicle's gas pedal, brakes and steering? If we don’t get a handle on the security issues for smart products, tomorrow’s news headlines will eliminate your need to imagine.

I remember a children’s song called “Dem Bones.” It went something like, “The toe bone's connected to the foot bone, the foot bone's connected to the ankle bone, the ankle bone's connected to the leg bone, now shake dem skeleton bones!”

Here’s a different take on that song. “The watch app is connected to the voice app, the voice app is connected to the car app, the car app is connected to the geofencing app - and that’s how the data gets around!”

While data access is great for helping us gain useful insights in all manner of life and business, it also poses a great threat when in the hands of those who would use it for ill intent.

IoT Data Should Be Private and Controlled

Data is being created, collected and consumed by IoT, everywhere. Yet, most consumers and companies don’t know if, or how, their data is being used. Many companies are monetizing our personal data, without our knowledge, and reaping billions of dollars. Yet, we continue to just give it away. Other companies are sharing this data within their ecosystem, to “enhance” the value of their products or services. Depending on the product or service, this information sharing can be of potential benefit for consumers, or a possible detriment.

So, what are we as individuals, and as a society to do? How do we discover who has access to our data, and how it is being used? Are we okay with this? After all, what we don’t know can’t hurt us, right?  Perhaps, we can start by becoming more aware and asking some of these questions:

  • What can we do to protect our data and keep it confidential?
  • How can we be assured that companies are acting responsibly with our data?
  • Who is responsible for data protection?
  • If we had a choice, how and what kind of information would we want shared with us?
  • How can we gain greater transparency over how our data is used?
  • Are we comfortable living with smart home devices that may listen in on private conversations we have at home?

The other day, my wife and I had a conversation at home about buying new shoes for our daughters. While having this conversation, we were surrounded by smart devices - Alexa, Ring, Nest and a multitude of smart phones and tablets. The next morning, the first image in my Instagram feed was an ad for toddler girls’ shoes. Coincidence, or targeted marketing? I haven’t figured out which device captured our conversation, but I’m certain one of the smart devices is monetizing the data it collects from private conversations going on in our home.

That story provides a real-life example of how companies may be monetizing the data they collect from IoT devices. As devices proliferate throughout society, it’s important for consumers to be aware that data is being captured, and to know how and when it is being captured. Manufacturers need to be more transparent about these practices so consumers have the right to opt in or out of data collection on such a private and intrusive level.

You Can’t Have Trust Without Transparency

Many of the answers to the privacy questions mentioned above will not come from technology alone. We must gain greater insight into, and control over, the way our data is used. Companies must self-regulate, and if they don’t, regulatory and legislative action should follow.

Many IoT manufacturers have direct control over their ecosystem, while others have more open systems and hub platforms that are more difficult to control, and specifically, to control how data is collected, stored, and ultimately used. Most companies fall short in communicating their data-handling policies to consumers.

We want these amazing devices in our homes, cars, offices, and bodies, but we don’t necessarily want the companies, or worse, hackers, misusing our information. It’s a catch-22. There are no easy answers or solutions; however, as a society, we must feel the urgency to address this growing problem. Consumers need to be aware, while manufacturers need to be responsible.

I think transparency is key to solving this problem. Companies must adopt a more transparent use of customer data, which will, in turn, build customer trust. Transparency will help us know what data is being tracked, how it is being tracked, and how it is being monetized or shared. In the near future, we will have systems that provide data visibility to consumers. Perhaps a privacy portal with authentication mechanisms, where consumers have autonomy, and even the ability to monetize their own data through revenue sharing with companies.

Not only will this give consumers control over their data, it will also help companies build greater loyalty and brand equity, when they show consistent data stewardship.

Protecting IoT Data in Transit

In addition to a higher level of transparency, manufacturers need to protect the sensitive data collected. Data encryption is a best practice for confidentiality, and should be used by all IoT manufacturers when transmitting data.  Making sure all connections to an IoT device are properly authenticated, and that access controls are in place, helps keep bad actors out of the device’s ecosystem. If IoT is going to continue to grow in the future, we must have confidence in the security and privacy of our data. I believe all IoT devices that collect personal data, or sensitive business information, should always use encryption.

Controlling access to encryption keys is accomplished through authentication. User authentication uses username and password combinations, biometrics, tokens and other techniques. Server authentication uses certificates to identify trusted third parties. Authentication allows a company to determine whether a user or entity is who they claim to be, and then verifies if, and how, they can access a system, including the ability to decipher encrypted data. Without question, multi-factor authentication is the most secure form of protection for users.

While encryption and authentication protect data, they can’t prevent unauthorized access to a network. As a result, in addition to protecting access through authentication, authorization is used to control who sees sensitive data, and what they can do with it.

Authorization allows IT to restrict activity within their resources, applications and data, by giving specific access rights to individuals and groups. Privileges are defined, and the level of access is granted to individuals or groups.

Updating software on IoT devices isn’t always possible, and many devices don’t have a secure method of ensuring the authenticity or integrity of software updates. This is a dangerous gap, as it enables hackers to introduce malware into devices. Code signing is an effective solution that provides proof of the origin and integrity of executable software, by using a private signing key to create a digital signature of a hash of the file. Code signing protects IoT device manufacturers, the businesses that deploy the devices, and the consumers of the devices from the dangers posed by unauthorized software.
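
As a minimal sketch of that sign-then-verify flow, here is an Ed25519 example using the widely deployed Python 'cryptography' package. It is illustrative only: in production the private key would live in an HSM, and the public key would be baked into the device at manufacture:

    # Minimal code-signing sketch (pip install cryptography).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Vendor side: sign the firmware image. Ed25519 hashes the message
    # internally as part of producing the signature.
    private_key = Ed25519PrivateKey.generate()
    firmware = b"...firmware image bytes..."
    signature = private_key.sign(firmware)

    # Device side: verify with the embedded public key before installing.
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, firmware)
        print("signature valid: install update")
    except InvalidSignature:
        print("signature invalid: reject update")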

Consistency, security and trust have always been requirements for ensuring lasting customer relationships, and the digital age is no different. It’s a matter of who is in control of our data. Today, IoT device manufacturers and businesses are in control. In the future, we must be in control of our own information.

About the author: Mike Nelson is the VP of IoT Security at DigiCert, a global leader in digital security. He oversees the company’s strategic market development for the various critical infrastructure industries securing highly sensitive networks and Internet of Things (IoT) devices, including healthcare, transportation, industrial operations, and smart grid and smart city implementations.


Growing Reliance on Digital Connectivity Amplifies Existing Risks, Creates New Ones

InfoSec Island - Wed, 04/24/2019 - 6:03am

Information security threats are intensifying every day. Organizations risk becoming disoriented and losing their way in a maze of uncertainty, as they grapple with complex technology, data proliferation, increased regulation, and a debilitating skills shortage.

By 2021 the world will be heavily digitized. Technology will enable innovative digital business models and society will be critically dependent on technology to function. This new hyperconnected digital era will create an impression of stability, security and reliability. However, it will prove to be an illusion that is shattered by new vulnerabilities, relentless attacks and disruptive cyber threats.

The race to develop the next generation of super-intelligent machines will be in full swing and technology will be intertwined with everyday life. Coupled with heightened global mistrust and rising geopolitical tensions, this will lead to a cyber threat that is relentless, targeted and disruptive. The operating environment for business will become increasingly volatile.

At the Information Security Forum, we recently highlighted the top three threats to information security emerging over the next two years, as determined by our research. Let’s take a quick look at these threats and what they mean for your organization:

THREAT #1: DIGITAL CONNECTIVITY EXPOSES HIDDEN DANGERS

Vast webs of intelligent devices, combined with increased speeds, automation and digitization will create possibilities for businesses and consumers that were previously out of reach. The Internet of Things (IoT) will continue to develop at an astonishing rate, with sensors and cameras embedded into a range of devices across critical infrastructure. The resulting nexus of complex digital connectivity will prove to be a weakness as modern life becomes entirely dependent on connected technologies, amplifying existing dangers and creating new ones.

5G Technologies Broaden Attack Surfaces: The emergence of the fifth generation of mobile networks and technologies (5G) will provide a game-changing platform for businesses and consumers alike. Colossal speeds, minimal latency and a range of newly available radio frequencies will connect previously unconnected devices, accelerate processes and change entire operating models – but with these changes comes a broader attack surface, as millions of telecommunication masts are built with varying levels of security. As organizations become heavily reliant on 5G to operate, new attack vectors will exploit weaknesses in this emerging technology.

Manipulated Machine Learning Sows Confusion: Machine learning, and neural networks in particular, will underpin processes such as image recognition, pricing analysis and logistics planning. As businesses become reliant upon machine learning and humans are taken out of the knowledge loop, machine learning will become a prime target for attackers. Confusion, obfuscation, and deception will be used by attackers to manipulate these systems, either for financial gain or to cause as much damage and disruption as possible.

Parasitic Malware Feasts on Critical Infrastructure: Parasitic malware is a particular strain of malware designed to steal processing power, traditionally from computers and mobile devices. However, attackers will turn their attention to the vast interconnectivity and power consumption of Industrial Control Systems (ICS), IoT devices and other critical infrastructure, which offer an enticing environment for this malware to thrive. All organizations will be threatened as this form of malware sucks the life out of systems, degrading performance and potentially shutting down critical services.

THREAT #2: DIGITAL COLD WAR ENGULFS BUSINESS

By 2021 a digital cold war will unfold, causing significant damage to business. The race to develop strategically important, next generation technologies will provoke a period of intense nation state-backed espionage – intellectual property (IP) will be targeted as the battle for economic and military dominance rages on. Cloud services will become a prime target for sabotage by those seeking to cause disruption to society and business. Drones will become both the weapon and target of choice as attackers turn their attention skywards.

State-Backed Espionage Targets Next Gen Tech: A new wave of nation state-backed espionage will hit businesses as the battle for technological and economic supremacy intensifies. The target: the next generation of technology. History teaches us that at times of great technological change targeted industrial espionage follows. Organizations developing technologies such as Artificial Intelligence (AI), 5G, robotics and quantum computing, will find their IP systematically targeted by nation state-backed actors.

Sabotaged Cloud Services Freeze Operations: Popular cloud providers will have further consolidated their market share – organizations will be heavily, if not totally, dependent on cloud providers to operate. Attackers will aim to sabotage cloud services causing disruption to Critical National Infrastructure (CNI), crippling supply chains and compromising vast quantities of data. Organizations and supply chains that are reliant on cloud services will become collateral damage when cloud services go down for extended periods of time.

Drones Become Both Predator and Prey: Drones will become predators controlled by malicious actors to carry out more targeted attacks on business. Developments in drone technologies, combined with the relaxation of aviation regulations, will amplify attackers’ capabilities as the threat landscape takes to the skies. Conversely, drones used for commercial benefit will be preyed upon, hijacked and spoofed, with organizations facing disruption and loss of sensitive data.

THREAT #3: DIGITAL COMPETITORS RIP UP THE RULEBOOK

Competing in the digital marketplace will become increasingly difficult, as businesses develop new strategies which challenge existing regulatory frameworks and social norms, enabling threats to grow in speed and precision. Vulnerabilities in software and applications will be frequently disclosed online with ever-decreasing time to fix them. Organizations will struggle when one or more of the big tech giants are broken up, plunging those reliant on their products and services into disarray. Organizations will rush to undertake overly ambitious digital transformations in a bid to stay relevant, leaving them less resilient and more vulnerable than ever.

Digital Vigilantes Weaponize Vulnerability Disclosure: Ethical vulnerability disclosure will descend into digital vigilantism. Attackers will weaponize vulnerability disclosure to undercut organizations, destroy corporate reputations or even manipulate stock prices. Organizations will find their resources drained as digital vigilantes reduce the timelines to fix vulnerabilities and apply patches, seriously threatening operations, reputations and endangering customers.

Big Tech Break Up Fractures Business Models: Calls for the breakup of big technology giants will reach their peak by 2021. By then, at least one of them will be broken up, significantly disrupting the availability of the products and services they provide to dependent organizations. From email to search engines, advertising, logistics and delivery, the entire operating environment will change. Malicious actors will also prey upon vulnerable, transitioning organizations.

Rushed Digital Transformations Destroy Trust: The demand for organizations to remain relevant in a technology-centric world will drive them to undertake rushed digital transformations. Organizations will deploy technologies such as blockchain, Artificial Intelligence (AI) and robotics, expecting them to seamlessly integrate with ageing systems. Organizations will be faced with significant disruption to services, as well as compromised data when digital transformations go wrong.

Preparation Must Begin Now

Information security professionals are facing increasingly complex threats, some new, others familiar but evolving. Their primary challenge remains unchanged: to help their organizations navigate mazes of uncertainty where, at any moment, they could turn a corner and encounter information security threats that inflict severe business impact.

In the face of mounting global threats, organizations must make methodical and extensive commitments to ensure that practical plans are in place to adapt to major changes in the near future. Employees at all levels of the organization will need to be involved, from board members to managers in non-technical roles.

The three themes listed above could impact businesses operating in cyberspace at break-neck speeds, particularly as the use of the Internet and connected devices spreads. Many organizations will struggle to cope as the pace of change intensifies. These threats should stay on the radar of every organization, both small and large, even if they seem distant. The future arrives suddenly, especially when you aren’t prepared.


How Microsegmentation Helps to Keep Your Network Security Watertight

InfoSec Island - Wed, 04/24/2019 - 5:57am

A submarine operates in hazardous conditions: in the ocean depths, even a small breach of its hull could spell disaster for the vessel and its crew. That’s why submarine designers don’t just rely on the strength of the outer skin for protection. The interior is segmented into multiple watertight compartments, with each capable of being closed off in the event of an emergency so that the rest of the boat can continue to function. 

The same logic has been applied to enterprise networks for several years now. Segmentation has been a recommended strategy for shrinking enterprise attack surfaces, and a lack of it has been cited as a contributing factor in some of the biggest-ever data breaches. A lack of segmentation also contributed to the $40M disruption experienced by manufacturer Norsk Hydro in March this year, when multiple IT and operational systems were hit by ransomware that moved laterally across its networks.

But while segmentation is recognized as an effective method for enhancing security, it can also add significant complexity and cost – especially in traditional on-premise networks and data centers. In these, creating internal zones usually means installing extra firewalls, cabling and so on to police the traffic flows between zones. This is complex to manage when done manually.

However, the move to virtualized data centers using software-defined networking (SDN) changes this. SDN’s flexibility enables more advanced, granular zoning, allowing networks to be divided into hundreds of microsegments and delivering a level of security that would be prohibitively expensive and complicated to implement in a traditional data center. Indeed, research by analyst firm ESG shows that nearly 70% of enterprises already use some form of micro-segmentation to limit hackers’ ability to move laterally on networks, and to make it easier to protect applications and data.

Even though SDN makes segmentation far easier to achieve, implementing an effective micro-segmentation strategy presents security teams with two key challenges. First, where should the borders be placed between the microsegments in the network or data center for optimum protection against malware and hackers? Second, how should the teams devise and manage the security policies for each of the network segments, to ensure that legitimate business application traffic flows are not inadvertently blocked and broken by the micro-segmentation scheme?

A process of discovery

To start devising a micro-segmentation scheme for an existing network or datacenter, you need to discover and identify all the application flows within it. This can be done using a discovery engine which identifies and groups together those flows which have a logical connection to each other and are likely to support the same business application.

The information from the discovery engine can be augmented with additional data, such as labels for device names or application names that are relevant to the flows. When compiled, this creates a complete map identifying the flows, servers and security devices that your critical business applications rely on.

Using this map, you can start to draw up your segmentation scheme by deciding which servers and systems should go into each segment. A good way to do this is by identifying and grouping together servers that support the same business intent or applications. These will typically share similar data flows, and so can be placed in the same segment.

Once the scheme is outlined, you can then choose the best places on the network to place the security controls to enforce the borders between segments. To do this, you need to establish exactly what will happen to your business application flows when those filters are introduced.

Remember that when you place a physical or virtual filtering device to create a segment border, some application traffic flows will need to cross that border. These flows will need explicit policy rules to allow them, otherwise the flows will be blocked and the applications that rely on them will fail.

Crossing the borders

To find out if you need to add or change specific policy rules, examine the application flows that you identified in your initial discovery process, and make a careful note of any flows which already pass through an existing security control. If a given application flow does not currently pass through any security control and you plan to create a new network segment, you need to know whether the unfiltered flow will be blocked when that segment border is established. If it will be blocked, you will need to add an explicit new policy rule that allows the application flow to cross it.
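
A minimal sketch of that border check, using invented flows and segment assignments: every discovered flow whose endpoints land in different proposed segments needs an explicit allow rule, or it will break when the border is enforced:

    # Sketch: flag discovered flows that will cross a proposed segment
    # border and therefore need an explicit policy rule. Data is invented.
    flows = [
        {"src": "web-01", "dst": "app-01", "port": 8443},
        {"src": "app-01", "dst": "db-01",  "port": 5432},
        {"src": "web-01", "dst": "web-02", "port": 80},
    ]
    segment_of = {
        "web-01": "web", "web-02": "web",
        "app-01": "app", "db-01": "data",
    }

    for f in flows:
        src_seg, dst_seg = segment_of[f["src"]], segment_of[f["dst"]]
        if src_seg != dst_seg:
            # Without an explicit rule, this application flow breaks.
            print(f"ALLOW {src_seg} -> {dst_seg} tcp/{f['port']} "
                  f"({f['src']} -> {f['dst']})")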

Micro-segmentation management

Having devised and implemented your micro-segmentation scheme, you will need to manage and maintain it, and ensure it works in harmony with the security across your entire enterprise network. The most effective way to achieve this is with a network security automation solution that can holistically manage all the security controls in your SDN environment alongside your existing traditional on-premise firewalls.

Automation ensures that the security policies which underpin your segmentation strategy are consistently applied and managed across your entire network estate, together with centralized monitoring and audit reporting. Any changes that you want to make to the segmentation scheme can be assessed and risk-checked beforehand to ensure that applications will continue to work, and no connectivity is affected. Then, if the changes do not introduce any risk, they can be made automatically, with zero-touch, and automatically recorded for audit purposes. This streamlines the management process, and avoids the need for cumbersome, error-prone manual processes every time you need to make a network change.

To conclude, building and implementing a micro-segmentation strategy requires careful planning and orchestration to be effective. Automation is critical to success, as it eliminates time-consuming, complex and risky manual security processes. But when done right, micro-segmentation helps to ensure that your networks offer watertight security, and stops a small breach from turning into a disaster that could sink your business.

About the author: Professor Avishai Wool is the CTO and Co-Founder of AlgoSec.
