InfoSec Island

Four Technologies that will Increase Cybersecurity Risk in 2019

InfoSec Island - Thu, 01/17/2019 - 10:21am

Attackers are not just getting smarter, they are also using the most advanced technologies available, the same ones being used by security professionals – namely, artificial intelligence (AI) and machine learning (ML).

Meanwhile, the widespread adoption of cloud, mobile and IoT technologies has created a sprawling IT attack surface that is getting harder to protect from cyber threats, since fixing every existing vulnerability in these infrastructures is simply not feasible.

Here are four ways attackers will exploit technology in new and creative ways over the next 12 months.

AI Bias will Pose New Security Risks

The bias issue is in its infancy now, but will grow rapidly this year and beyond. We can expect attackers to exploit the vulnerabilities associated with it.

Since algorithms are being applied everywhere, bias will follow them. For example, for AI to function properly in cybersecurity, a continuous feed of quality data is required. Garbage in will produce garbage out, such as too many false positives and/or too many false negatives. Furthermore, AI gives probable, rather than definitive, answers.
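To make that tradeoff concrete, the short sketch below (in Python, with invented scores and labels rather than data from any real product) shows how moving a detection threshold converts false negatives into false positives and back – probabilistic output that still needs a security analyst to interpret.

    # Illustrative only: how a detection threshold trades false positives
    # against false negatives. Scores and labels are made up for the example.
    # Each tuple is (score the model assigned, whether the event was malicious).
    events = [(0.95, True), (0.80, True), (0.65, False), (0.55, True),
              (0.40, False), (0.30, False), (0.20, True), (0.10, False)]

    for threshold in (0.25, 0.50, 0.75):
        fp = sum(1 for s, bad in events if s >= threshold and not bad)
        fn = sum(1 for s, bad in events if s < threshold and bad)
        print(f"threshold={threshold:.2f}  false positives={fp}  false negatives={fn}")

Raising the threshold silences noisy alerts but lets real attacks through; lowering it does the reverse. Neither setting is "correct," which is why unverified output drifts into unreliable results.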

We expect AI bias to increase this year, since many users are not updating their base data and results are not being verified by security analysts. Under these circumstances, AI can have the reverse effect for cybersecurity: instead of making organizations more secure, AI will generate unreliable insights that, if followed, will increase, not decrease, risk.

Automation/Orchestration Tools will be Hijacked

Automation and orchestration tools allow developers and security professionals to achieve new levels of speed and efficiency using unattended processes performed in software. These frameworks, if compromised by attackers, can be co-opted for malicious purposes.

For example, Kubernetes, the world’s most popular cloud container orchestration system, recently experienced its first major security vulnerability.

The bug, CVE-2018-1002105, aka the Kubernetes privilege escalation flaw, allows specially crafted requests to establish a connection through the Kubernetes API server to backend systems, then send arbitrary requests over the same connection directly to these machines.

Exploiting just one Kubernetes vulnerability would enable an attacker to take down containers across the globe. While the Kubernetes bug has been fixed, the writing on the wall is clear: more automation and orchestration tools will be targeted in the next 12 months.
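For defenders, the practical question is whether a given cluster runs one of the patched releases (v1.10.11, v1.11.5, or v1.12.3). Below is a rough Python sketch of that check; the API server address is a hypothetical placeholder, and a real cluster may require an authorization header and proper TLS verification.

    # Rough sketch: query a cluster's /version endpoint and flag builds that
    # predate the CVE-2018-1002105 fixes (v1.10.11 / v1.11.5 / v1.12.3).
    import requests

    API_SERVER = "https://kubernetes.example.com:6443"  # hypothetical endpoint
    PATCHED = {10: 11, 11: 5, 12: 3}  # minor release -> first patched patch level

    info = requests.get(f"{API_SERVER}/version", verify=False, timeout=10).json()
    minor = int(info["minor"].rstrip("+"))
    patch = int(info["gitVersion"].split(".")[2].split("-")[0])

    if minor < 10 or (minor in PATCHED and patch < PATCHED[minor]):
        print(f"{info['gitVersion']} appears vulnerable to CVE-2018-1002105")
    else:
        print(f"{info['gitVersion']} appears patched")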

Robotic Process Automation Will be Targeted

Robotic process automation (RPA) is being used to control a wide range of operational technologies in manufacturing and many other critical infrastructure sectors.

From a security standpoint, RPA creates a dangerous new attack surface that has multiple layers, including a robust web layer, an API layer, a data exchange layer, and so on. Plus, RPA systems lack robust defense mechanisms.

While we did not see many RPA vulnerabilities disclosed in 2018, this is likely to change this year as RPA solutions go increasingly mainstream. If exploited, these vulnerabilities could compromise an entire industrial plant or even several facilities at once.

API Attacks Will Increase

Many companies fail to protect their APIs with the same level of security they devote to networks and business-critical applications. As a result, APIs have far-reaching implications for security teams. Case in point: Google, which announced it would be shutting down Google+ because of an API flaw that exposed user data.

The Google experience is just the tip of the iceberg, as companies rarely disclose API attacks. More high-profile API attacks are inevitable, as APIs provide hackers with the keys to the kingdom, giving them numerous avenues of access into corporate data, processes, and operations.

Furthermore, APIs generally contain clear, well-documented details on the inner workings of applications – information that provides hackers with valuable clues on attack vectors they can exploit.

While advances in technology provide many benefits when it comes to digital transformation, they also open new threat vectors and the potential for attacks that can spread quickly over connected ecosystems. Maintaining visibility into traditional and emerging IT infrastructures (orchestration, RPA, APIs, etc.) and the vulnerabilities associated with them will play a central role in reducing security incidents this year. Identifying and fixing those that pose the highest risk will be even more important.

About the author: Dr. Srinivas Mukkamala, co-founder and CEO of RiskSense, is a recognized expert on artificial intelligence (AI) and neural networks. He holds a patent on Intelligent Agents for Distributed Intrusion Detection System and Method of Practicing.


Strategies for Winning the Application Security Vulnerability Arms Race

InfoSec Island - Thu, 01/17/2019 - 10:10am

As cyber criminals continuously launch more sophisticated attacks, security teams increasingly struggle to keep up with the constant stream of security threats they must investigate and prioritize. Companies with a large web presence (e.g., retail/e-commerce companies) face an especially broad threat landscape: web application attacks were responsible for 38 percent of the data breaches examined in the 2018 Verizon Data Breach Investigations Report (DBIR).

To win the vulnerability arms race, security teams need to fight fire with fire by partnering with their own application development teams and enabling them to identify and fix security vulnerabilities in their code earlier in the development process. In doing so, organizations can resolve critical security vulnerabilities before applications move into production, greatly minimizing their risk for costly data breaches.

Catching and resolving vulnerabilities earlier in the software development life cycle (SDLC) makes life a lot easier for security teams further downstream. Shifting left enables security teams to avoid tedious and unnecessary review, greatly reducing their workload and allowing them to focus on the most important security threats to their organizations.

Benefits of shifting left in software development

Various studies from the past decade support the assertion that fixing a software vulnerability earlier in the SDLC is faster, is much less expensive to the organization, and requires fewer resources than fixing a vulnerability in an application that has been released to production.

A 2008 white paper issued by IBM states that “the costs of discovering defects after release are significant: up to 30 times more than if you catch them in the design and architecture phase.” While the white paper was issued a decade ago, this statement is just as significant today.

Obviously, the preferred approach is for developers to resolve security vulnerabilities while they’re coding rather than letting the same fatal issues propagate in countless other places in an application—and then having to return to the lengthy development phases of testing, quality assurance, and final production. Implementing a solution that aligns with development processes allows developers to nip vulnerabilities in the bud as they code, and creates a positive habit that is quick and painless.

Potential software security obstacles

If the solution is so obvious, then why aren’t more development organizations doing it?

  1. Development teams often lack security expertise, and security teams often lack development expertise. Consequently, these teams may feel as if they’re working at cross purposes.
  2. For many years, developers’ highest priority has been to get working software out the door as quickly as possible. If they perceive security testing tools as a roadblock in CI/CD workflows and stringent development schedules, they’ll refuse to adopt them, or they’ll find ways to work around them.

To address these issues, organizations should select the right security tools. These tools should provide developers with technical guidance and educational, contextual support to fix any security vulnerabilities flagged in their code immediately. These tools need to be fast and accurate, fit seamlessly into development workflows, and support developers in producing secure code while also enabling them to hit their release schedules.

What to look for in a static application security testing (SAST) tool

  1. Accuracy. Comprehensive code coverage; accurate identification and prioritization of critical security vulnerabilities to be fixed.
  2. Ease of use. An intuitive, consistent, modern interface involving zero configuration; insight into vulnerabilities with necessary contextual information (e.g., dataflow, CWE vulnerability description, and detailed remediation advice).
  3. Speed. Fast incremental analysis results that appear in seconds as developers write code.
  4. DevSecOps capabilities. Support for popular build servers and issue trackers; flexible APIs for integration into custom tools.
  5. Scalability. Enterprise capabilities to support thousands of projects, developers, and over 10 million issues.
  6. Management-level reporting and compliance with industry standards. Security standards coverage including OWASP Top 10, PCI DSS, and SANS/CWE Top 25; embedded technologies compliance standards coverage including MISRA, AUTOSAR, CERT C/C++, ISO 26262, and ISO/IEC TS 17961.
  7. eLearning integration. Contextual guidance and links to short courses specific to the issues identified in code; just-in-time learning when developers need it.

Security and development teams need to collaborate closely to ensure that enterprise web and mobile applications are free of vulnerabilities that can lead to costly data breaches. Choosing the right development security tool is the first step toward achieving this critical goal.

About the author: Anna Chiang is the Sr. Manager, SAST Product Marketing at Synopsys. She has also held lead roles in application security and platform product management at WhiteHat Security, Perforce Software, and BlackBerry.


Taking Advantage of Network Segmentation in 2019

InfoSec Island - Wed, 01/16/2019 - 7:52am

Overview

Security is and will always be top of mind within organizations as they plan out the year ahead. One method of defense that always deserves attention is network segmentation.

In the event of a cyberattack, segmented networks will confine the attack to a specific zone – and by doing so, contain its impact by preventing attackers from exploiting their initial access to move deeper into the network. By segmenting your network and applying strong access controls for each zone, you isolate any attack and prevent damage before it can start.

But today, in 2019, enterprise networks are no longer just networks. They are a patchwork of traditional networks, software-defined networks, cloud services and microservices. Segmenting this hybrid conglomerate requires an updated approach.

In this article, I will review exactly how your organization can get started with network segmentation – including some potential issues to plan for and successfully avoid.

Getting Started

Organizations seeking a starting point typically find that designating rudimentary network zones is the most successful approach. Start with simple zones to begin segmentation, even for today’s more complex hybrid networks. Initial zones typically include Internal, External, Internet, and DMZ. Further refinements to segments and access policies can be made to these initial zones, ultimately resulting in an acceptable policy.

Solidify Your Security Policy

Organizations need a standard policy for allowable ports and protocols between zones. This needs to be fully documented and in a format that can be quickly accessed and reviewed – not something unofficial that’s stuck in the head of your IT security manager. Once security policy is transcribed and centralized in a single place, each modification to access policies can be made consistently and confidently.

As your segmentation initiative continues, take the opportunity to consider segmenting the network even further using user identity and application controls.

Restrict Egress

Most likely, your network is already infected, and even if it isn’t, it’s good practice to assume so. Completely preventing malware in your network is practically impossible. Luckily, there is an easier mitigation: if you limit egress (outbound) connectivity to only what is needed, you can prevent malware from calling back home and from uploading stolen data.
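Conceptually, an egress policy is just a short allowlist checked against observed outbound connections. The toy Python sketch below illustrates the idea – the addresses and ports are invented for the example, and real enforcement belongs in your firewalls and proxies, not in a script.

    # Toy illustration of the egress-allowlist idea: compare observed outbound
    # connections against the few destinations the business actually needs.
    ALLOWED_EGRESS = {
        ("203.0.113.10", 443),  # hypothetical: app server -> payment gateway
        ("203.0.113.20", 53),   # hypothetical: DNS forwarder
    }

    observed = [
        ("203.0.113.10", 443),
        ("198.51.100.99", 6667),  # workstation -> IRC port: classic C&C callback
    ]

    for host, port in observed:
        if (host, port) not in ALLOWED_EGRESS:
            print(f"ALERT: unexpected outbound connection to {host}:{port}")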

Traditional Networks, the Cloud, and Beyond

As your network expands into the cloud you may need to partner with application teams. If your organization is moving to the cloud in a “lift and shift” approach, your cloud networks are probably considered an extension of your on-premise network and the traditional segmentation architecture will be applied. Organizations that are already doing “cloud native” operations (characterized by DevOps and CI/CD processes) take a different approach. They architect their cloud around applications rather than networks. In this case, the cloud will be owned by the relevant application team and, in the event of a security incident, you will need to work closely with them to contain the breach.

This exercise often results in two or more teams becoming united around a common goal – while also providing an opportunity to improve the network’s connectivity and security. When you identify the parts of the network you are responsible for segmenting, you will also want to include any anticipated networks that may be adopted or merged with your existing network. Understanding that these changes will introduce new security concerns and obstacles, you can better plan for the overall impact on your network segmentation strategy.

Make It Manageable

Parallel to ongoing changes in networking platforms are changes to network policy. This year, your organization may undertake a cloud-first initiative, launch new businesses, or open new locations across the globe – and you need to be ready for it.

Regardless of what changes in your network, there needs to be consistency over managing and tracking these changes. Identifying tools that have previously been configured on your network or are currently used to manage your network makes managing your segmentation process achievable from the start. If you identify multiple existing solutions, consider whether you can integrate them to create a central console for designing, implementing and managing ongoing segmentation.

Implement In Phases

Once you’ve reached this point in the segmentation process, you should know which resources are available and who can contribute to segmentation design and implementation. Armed with this knowledge, you should now set priorities.

First and foremost, assess the network at a high level and consider what zones you will want to appoint (regardless of how rudimentary it seems). Even segmenting the network in half will give you greater control over connectivity, increase visibility into access, and surface risks associated with existing access rules that you may have previously missed.

Starting with the initial four zones from above, you can identify further connectivity restrictions and prioritize which of the zones needs to be further segmented (e.g., sensitive data, cyber assets, etc.).

If you are required to comply with specific industry regulations, start by designating a sensitive data zone (for example, a zone for PCI DSS systems and data). You can begin with compliance in mind and then take a step back to approach the broader network. This approach will help you identify connectivity improvements, reduce access permissions, and complicate potential attacks on sensitive data.

Microsegmentation

When you start applying security, you apply it at the macro level: zones, subnets, VLANs, etc. As you make progress in your security journey, you can consider the micro as well. Modern architectures such as SDN, cloud platforms and Kubernetes microservices provide flexible methods to develop individual access controls for applications with a security-first mentality (commonly referred to as microsegmentation). Restricting connectivity per application gives you greater control, with more specific whitelists and easier identification of abnormal behavior. To do this effectively you will need to involve application owners.
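As an illustration of what a per-application rule can look like, here is a minimal sketch that uses the official Kubernetes Python client to create a NetworkPolicy. The namespace, labels, and port are hypothetical; the real whitelist has to come from the application owner.

    # Minimal microsegmentation sketch: only the "frontend" app may reach the
    # "billing" app, and only on port 8443. Requires the kubernetes package.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside the cluster

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="billing-allow-frontend-only"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "billing"}),
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(
                        match_labels={"app": "frontend"}))],
                ports=[client.V1NetworkPolicyPort(port=8443)],
            )],
        ),
    )

    client.NetworkingV1Api().create_namespaced_network_policy("prod", policy)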

Don’t Do Too Much, Too Soon

Remember to segment in a gradual, deliberate progression, as implementing highly granular controls across the entire network at once can become unmanageable due to the sheer volume of segments.

When implementing network segmentation, it is very important to avoid over-segmenting, as this may result in a loss of manageability due to increased complexity. While segmentation improves overall compliance and security, organizations should segment incrementally in order to maintain manageability and avoid overcomplicating the network.

Taking on too much, too soon – or moving to micro- or nano- segmentation from the beginning – can create an “analysis paralysis” situation, where the team is quickly overwhelmed and the promise of network segmentation is lost.

An example of doing too much would be assigning a security zone to every application in a 500-application environment. If every zone is individually customized, the sheer volume of network rules can overwhelm an organization, leaving it less secure.

Take Action

As anyone who’s worked a day in security knows, automated solutions can send an overwhelming number of alerts. Network Security Policy Management (NSPM) solutions can alleviate this alert fatigue. Violations of security policy can be reviewed to determine whether they are allowable exceptions or require changes in order to comply. Beyond compliance with policy, NSPM solutions can assess whether access violations followed approved exception procedures.

Security professionals are often targeted by attackers. NSPM solutions can also be used to determine if network violations are the result of attackers using compromised credentials to grant themselves access to sensitive data.

Never Rest

The proper approach to network segmentation is to never say “completed.” Every network is subject to change – and so are the access controls governing connectivity. A continuous stepwise approach is best. Using NSPM, with each step, you have the opportunity to review, revise, and continue the momentum towards optimal segmentation. 

About the author: Reuven Harrison is CTO and Co-Founder of Tufin. He led all development efforts during the company’s initial fast-paced growth period, and is focused on Tufin’s product leadership. Reuven is responsible for the company’s future vision, product innovation and market strategy. Under Reuven’s leadership, Tufin’s products have received numerous technology awards and wide industry recognition.


2019 Predictions: What Will Be This Year’s Big Trends in Tech?

InfoSec Island - Wed, 01/16/2019 - 7:41am

2018 was a year that saw major developments in machine learning, artificial intelligence and the Internet of Things, along with some of the largest scale data breaches to date. What will 2019 bring, and how can businesses prepare themselves for the technological developments to come over the next twelve months?

Data security 

More than any other year, 2018 has cemented the importance of keeping data secure online, with many high-profile breaches such as British Airways, Marriott Hotels and Ticketmaster hitting the headlines. 

But despite these breaches, the worrying trend of insecurely storing massive amounts of personal data online is showing no signs of slowing down. As such, cyber criminals will continue to look for ways to exploit data in any way they can: committing fraud, theft or using it to craft highly targeted attacks. With people beginning to grow conscious of the digital footprint they are leaving behind, it’s likely that we’ll start to see data security being taken far more seriously by businesses over the next year.

This will have a significant impact on larger organisations, which will need to look beyond securing their perimeters and ensure that their networks are robust, especially as networks become a more popular attack vector amongst cyber criminals due to the increasing number of network vulnerabilities.

Organisations will need to bear in mind that implementing firewalls and securing endpoints are no longer enough to stay secure against evolving threats, and that they will instead need to focus on ensuring they have a robust and thorough security strategy that can handle the complexities of the modern cyber security landscape.

Ethics & legislation

Data, and the legislation that governs it, will continue to be an ongoing source of discussion – and one that could ultimately affect how we interact with technology in the years to come.

With the landmark General Data Protection Regulation (GDPR) coming into force last year, we should soon see the impact that this will have on organisations, who will be challenged on their data processing and protection procedures. The next twelve months will likely see the first organisations face the penalties that go along with falling foul of GDPR, and organisations would be wise to investigate their own data handling now before it becomes too late.

With many businesses continuing to embrace cutting-edge technologies such as artificial intelligence and machine learning, we will also start to explore the implications of these when it comes to data protection, with updates to existing legislation likely to be required to keep up with the rapid pace of development across businesses.

IoT and AI 

Despite the many advancements made in the world of artificial intelligence over the last year, the technology is still very much in its infancy. With many still sceptical of the scope and usefulness of AI, 2019 will serve to be an important year for exploring its possibilities, along with investigating how it could impact our daily and working lives, particularly when it comes to data handling, cloud technologies and smart home devices.

To sustain the growth of AI, the technology will need to work efficiently alongside IoT devices in order to rapidly process growing swathes of data. The interaction between the two will serve as an interesting case study for their utility over the coming year, and should see major developments and acceptance in the wider working world.

Data will continue to be a crucial consideration for businesses over the coming year, intertwining with many technological advancements, and proving to be a source of ethical and legislative interest. Although it’s impossible to predict exactly what might happen over the next twelve months, we can be sure that data will be the focus of many discussions when it comes to security and emerging technologies in 2019.

About the author: With over 25 years’ business and technical experience in providing IT solutions, Matt’s expertise covers the design, implementation, support and management of complex communications networks.


Why Zero Tolerance Is the Future for Phishing

InfoSec Island - Wed, 01/09/2019 - 12:20pm

Our Testing Data Shows You’re Letting Me Hack You Every Time

Phishing just doesn’t get the love it deserves in the security community. It doesn’t get the headlines, security staff time, or dedicated attention that other, more flashy threat vectors get. Certainly, high-impact malware variants that sweep the globe and get their own cool logos and catchy names command respect. But at the end of the day, phishing attacks are really the ones that bring most organizations to their knees, and they sit at the very start of some of the most devastating cyberattacks.

From my experience as a penetration tester and social engineer, it seems that most customers view phishing campaigns as a requirement to deal with once a year, with some high-performing companies tossing in additional computer-based training. In most instances, this type of testing is just one mandatory component of an annual compliance test like FedRAMP, which means, in effect, that the enterprise hasn’t tested its phishing defenses since the last time an audit was performed. Yet the numbers tell an alarming story: phishing has been shown to be the first step in over 90% of recorded breaches. It is a formidable threat to every organization, yet one typically not addressed adequately in cybersecurity strategies.

As security professionals, we are commonly asked “what is an acceptable failure rate for phishing?” (FedRAMP and other certifications address acceptable failure rates as well.) For years, the prevailing sentiment and some professional guidance has been that anything under 10% would be trending in the right direction. While this guidance is, in my view, misguided, many industry professionals and consultancies have given out the same improper (or perhaps we should say “very outdated”) guidance, however well intentioned.

We have gathered three years of phishing test data from multiple phishing campaigns launched at companies ranging from top Fortune 500 firms all the way down to sole proprietorships. From the data, one metric stands above all the others: a 62.5% compromise rate. We have tested over 100 companies that have, in their opinion, “stellar phishing programs,” those that have a single campaign once a year, and those that do relatively nothing from year to year. While the quality of phishing testing programs has a broad range, the fact of the matter is, if a person clicks on a phishing email link (and 26.2% do, on average, in our data), there is a 62.5% chance on average that person is either going to download a payload that will give the malicious actor control of the host, or that person will share working credentials to their account. While there are security measures that can help to a degree, the metrics are clear—even if the threat actor doesn’t compromise your host, over half the time an active username and password is now in the hands of a malicious actor.

These results should be a significant wake-up call for every organization. Using the “old” acceptable rate of a 10% click-through, that still leaves an overall compromise rate of over 6%. Let’s look at what that might mean for a large enterprise with, say, 50,000 employees. A 26.2% click rate equals 13,100 clicks. If this company were to fall into the “average” compromise rate, that would be 8,187 compromises! Even the industry-standard 10% click rate would yield 3,125 compromises.
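A few lines of Python reproduce the arithmetic:

    # Reproducing the article's numbers for a 50,000-employee enterprise.
    employees = 50_000
    compromise_rate = 0.625           # per click, from the author's test data

    for click_rate in (0.262, 0.10):  # observed average vs. the "old" 10% target
        clicks = int(employees * click_rate)
        compromises = int(clicks * compromise_rate)
        print(f"click rate {click_rate:.1%}: {clicks:,} clicks, ~{compromises:,} compromises")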

I believe that companies should be striving for zero clicks. While this may well be unattainable, we humans tend to settle for coming close to our goals. A goal of 10% will likely mean 12%. A goal of 2% will likely achieve a result of 5% – and, with a 62.5% compromise rate, will still likely open the enterprise network to an unacceptable level of risk. Given not only the important role phishing plays as an entryway to significant breaches but also the likelihood of compromise per click, the industry should be shouting “Zero Tolerance” for all to hear. The days of acceptable risk should be over.

We are unlikely to eliminate the human element and the risks that brings. There will always be mistakes or issues as long as humans are involved. But by setting far more aggressive goals and standing up progressively better phishing testing programs to train employees, reward them for improvement, incentivize them for doing the right thing, and demonstrate what “good” looks like, enterprises can both set and meet more aggressive targets to better protect the organization.

While phishing isn’t the most interesting, headline-worthy topic in cyber news today, it should be a top concern in nearly every company’s cybersecurity strategy. The cultural norm needs to shift to zero tolerance, and until it does, as a social engineer and fake criminal by day, I would like to thank you. Every single phishing campaign I run is going to provide me access to your system. You are making access to your company so very easy.

About the author: Gary De Mercurio is senior consultant for the Labs group at Coalfire, a provider of cybersecurity advisory and assessment services.


Universities Beware! The Biggest Security Threats Come from Within the Network

InfoSec Island - Tue, 01/08/2019 - 8:09am

Higher education networks have become incredibly complex. Long gone are the days when students connected desktop computers to ethernet cables in their dorm rooms for internet access. Now students can access the school’s wireless network at any time from anywhere and often bring four or more devices with them on campus. Expecting to use their smartphones and gaming consoles for both school-related and personal matters, they rely on constant internet connectivity.

While the latest technology streamlines processes and makes the learning experience more efficient, higher education institutions’ networks have not kept up with technology and cyber security requirements. Network security threats have become more common, and according to a recent Infoblox study, 81 percent of IT professionals state securing campus networks has become more challenging in the last two years.

Nevertheless, outside threats aren’t posing the biggest challenges; internal threats are.

More devices, more malware

IT administrators at universities have seen a surge in the number of devices connected to campus networks, making those networks more vulnerable to cyberattacks. Innovation in personal technology has played a large role in this. Students are bringing a surplus of devices with them to school beyond laptops, such as smartphones and tablets. For example, an Infoblox study found that students now use tablets (61%), smartwatches (27%) and gaming consoles (25%) on campus.

This spike in devices directly impacts universities’ network activity. Where a few years ago IT administrators only had to worry about managing the school’s devices, and potentially student and faculty laptops, that is no longer the case. The survey found that 60 percent of faculty, students and IT professionals use four or more devices on the campus network. This has made managing network activity incredibly complex and has increased the risk of cyberthreats. Devices that are not native to the university network often do not maintain the same security standards that IT administrators are accustomed to.

Outdated security best practices

IT improvements have not been able to keep pace with the rate at which network activity is changing, making university networks an easy target for hackers and DDoS attacks. When devices using the university network are not properly secured, hackers can take advantage by breaking into a device, accessing the network and wreaking havoc that can cost universities millions of dollars.

For example, Infoblox’s survey found that 60 percent of faculty haven’t made network security changes in two years. In addition to not making updates to security best practices, 57 percent use outdated security measures like only updating passwords as a security precaution. Poor security practices also make it easier for hackers to compromise network infrastructure and access sensitive information.

A comprehensive cybersecurity strategy that involves network protection can help to combat these types of attacks, but only 52 percent of current network management solutions have DNS provisioning capabilities and can provide remote network access control. This capability plays a critical role in identifying unusual activity on the network.

Lack of security awareness

Additionally, college students and faculty alike are not up to speed with the latest cybersecurity best practices and often make poor decisions that ultimately compromise network security. Thirty-nine percent of IT administrators say users aren’t educated enough on security risks, which makes managing the network more challenging. Students are also unaware of the risks IoT devices can pose to the overall health of the network and don’t have the security knowledge to understand the nuances. For example, 54 percent of IT administrators say at least 25 percent of students’ devices come onto campus already infected with malware.

College students are known to be reckless when it comes to partying, but it appears this mindset has also influenced their approach to cybersecurity. Infoblox’s survey also found that one in three college students have heard of other students implementing malware or making malicious attacks on the school’s network. Students clearly have little regard for how their usage can impact the health of the network, making the job of the IT administrator extremely difficult.

Conclusion

For better network security at higher education institutions, change needs to begin from within. The IT department needs to implement a next-level network security strategy that can thwart the ongoing threat of DDoS attacks. Students and faculty need to be educated on security best practices when using devices on the university network. In the age of the Internet of Things, the number of internet-connected devices on the campus network will only increase, and the network needs to be fortified to support this influx from both a performance and security standpoint.

About the author: Victor Danevich is the CTO of Infoblox where he helps customers achieve Next Level Networking via hyper-scalability, implementing automation, and improving network availability with solutions that are built with security from the core.


IAST Technology Is Revolutionizing Sensitive Data Security

InfoSec Island - Tue, 01/08/2019 - 7:55am

Unauthorized access to sensitive data, also known as sensitive data leakage, is a pervasive problem affecting even those brands that are widely recognized as having some of the world’s most mature software security initiatives, including Instagram and Amazon. Sensitive data can include financial data such as bank account information, personally identifiable information (PII), and protected health information (i.e., information about an individual’s health status, provision of care, or payment for care that can be linked to that individual).

If an organization suffers a sensitive data breach, it is expected to notify authorities and disclose the breach. For example, under GDPR, the breached firm is expected to disclose the breach within 72 hours of discovery. Such an incident can result in damage to the brand, marred customer trust leading to lost business, regulatory penalties, and the cost of funding the investigation into how the leak happened. Data breaches may even lead to lawsuits. In short, such an incident could be incredibly detrimental to the future of an organization. There are a variety of regulations in place globally that emphasize the importance of protecting sensitive data. So why, then, are we still seeing this issue persist?

While we tend to only hear about the massive brands suffering a breach in the news, it’s not only these giant enterprises that are at risk. In fact, small- and medium-sized firms are equally, if not more, susceptible to sensitive data leakage concerns. While the payoff for an attacker isn’t as grand, smaller companies are less likely to have strategies in place to detect, prevent, and mitigate vulnerabilities leading to a breach.

To avoid a sensitive data leak leading to a breach, firms of all sizes need to pay attention to cyber security. Firms often build their own applications, and almost always rely on pre-existing applications to run their business.

If you build your own applications, test them extensively for security. With interactive application security testing (IAST), you can perform application security testing during functional testing. You don’t really need to hire experts to perform vulnerability assessment when you have IAST.

IAST solutions help organizations identify and manage security risks associated with vulnerabilities discovered in running web applications using dynamic testing (a.k.a., runtime testing) techniques. IAST works through software instrumentation, or the use of instruments to monitor an application as it runs and gather information about what it does and how it performs. 

Considering that 84 percent of all cyber-attacks are happening in the application layer, take an inventory of the data relating to your organization’s applications. Develop policies around it. For instance, consider how long to keep the data, the type of data you’re storing, and who requires access to that data. Don’t store data you don’t need as a business. Keep information only as long as necessary. Enforce data retention policies. Put authorization and access controls in place around that data. Protect passwords properly. And, train employees on how to handle this data.

While it’s simple to recommend that firms should be taking an inventory of the data being processed by organizational applications, that is, in reality, a massive undertaking. Where do you even start?

IAST not only identifies vulnerabilities, but verifies them as well. When working with traditional application security testing tools, a high false-positive rate is a common problem. IAST technology’s verification engine helps firms understand which vulnerabilities to resolve first and minimizes false positives.

Sensitive data in web applications can be monitored through IAST, thus providing a solution to the data leakage problem. IAST monitors web application behavior – including code, memory, and data flow – to determine where sensitive data is going, whether it’s being written to a file in an unprotected manner, whether it’s exposed in the URL, and whether proper encryption is being used. If sensitive data isn’t being handled properly, the IAST tool will flag the instance. There is no manual searching for sensitive data. IAST tooling intelligence detects it on behalf of the application owner – who can also alter the rules to fine-tune their goals.
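As a heavily simplified illustration of the kind of check an IAST agent automates, the Python sketch below scans an outbound URL for sensitive values in its query string. The patterns and URL are invented for the example; a real IAST tool instruments the running application and follows the data flow rather than pattern-matching strings after the fact.

    # Illustrative only: flag sensitive values exposed in a URL.
    import re

    SENSITIVE_PATTERNS = {
        "payment card number": re.compile(r"\b\d{13,16}\b"),
        "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def sensitive_kinds_in(url):
        """Return the kinds of sensitive data visible in the URL."""
        return [kind for kind, pat in SENSITIVE_PATTERNS.items() if pat.search(url)]

    url = "https://api.example.com/charge?card=4111111111111111&amount=10"
    for finding in sensitive_kinds_in(url):
        print(f"FLAG: {finding} exposed in URL: {url}")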

It’s also important to note that applications are built from many components: third party components, proprietary code, and open source components. Think of it like Legos. Any one (or more) of the pieces could be vulnerable. This is why, when testing your applications, it’s critical to fully test all three of these areas.

And we can’t forget implications relating to increasing cloud popularity. With the growing adoption of cloud, more and more sensitive data is being stored out of network perimeters. This increases the risk as well as the attack surface. Also increasing are the regulatory pressures and the need to deliver more with fewer resources in the shortest time possible. Under these circumstances, IAST is the most optimal way to test for application security, sensitive data leakage, and prevent breaches.

About the author: Asma Zubair is the Sr. Manager, IAST Product Management at Synopsys. As a seasoned security product management leader, she has also led teams at WhiteHat Security, The Find (Facebook) and Yahoo!


New Year’s Resolution for 2019: Cybersecurity Must Be the Top Priority for the Board

InfoSec Island - Fri, 01/04/2019 - 7:02am

In the year ahead, organizations must prepare for the unknown so they have the flexibility to endure unexpected and high impact security events. To take advantage of emerging trends in both technology and cyberspace, businesses need to manage risks in ways beyond those traditionally handled by the information security function, since innovative attacks will most certainly impact both business reputation and shareholder value.

It is recommended that businesses focus on the following security topics in 2019:

  • The Increased Sophistication of Cybercrime and Ransomware
  • The Impact of Legislation
  • Smart Devices Challenge Data Integrity
  • The Myth of Supply Chain Assurance 

The Increased Sophistication of Cybercrime and Ransomware

Criminal organizations will continue their ongoing development and become increasingly more sophisticated. Some will have roots in existing criminal structures, while others will emerge focused purely on cybercrime. Legitimate organizations will struggle to keep pace with this increased sophistication, and the impact will extend worldwide, with malware in general and ransomware in particular becoming the leading means of attack. While overall damages arising from ransomware attacks are difficult to calculate, some estimates suggest a global loss in excess of $5 billion in 2017. On the whole, the volume of new mobile malware families grew significantly throughout 2017, in particular mobile ransomware. This should be expected to continue in 2019. Email-based attacks such as spam and phishing (including targeted spear phishing) are most commonly used to obtain an initial foothold on a victim’s device. Cyber criminals behind ransomware will shift their attention to smart and personal devices as a means of spreading targeted malware attacks.

The Impact of Legislation

National and regional legislators and regulators that are already struggling to keep pace with existing developments will fall even further behind the needs of a world eagerly grasping revolutionary technologies. At present, organizations have insufficient knowledge and resources to keep abreast of current and pending legislation. Additionally, legislation by its nature is government- and regulator-driven, resulting in a move towards national regulation at a time when cross-border collaboration is needed. Organizations will struggle to keep abreast of developments that may also upend business models many have taken for granted. This will be particularly challenging for cloud implementations, where understanding the location of cloud data has often been an oversight.

Smart Devices Challenge Data Integrity

Organizations will adopt smart devices with enthusiasm, not realizing that these devices are often insecure by design and therefore offer many opportunities for attackers. In addition, there will be an increasing lack of transparency in the rapidly-evolving IoT ecosystem, with vague terms and conditions that allow organizations to use personal data in ways customers did not intend. It will be problematic for organizations to know what information is leaving their networks or what is being secretly captured and transmitted by devices such as smartphones, smart TVs or conference phones. When breaches occur, or transparency violations are revealed, organizations will be held liable by regulators and customers for inadequate data protection.

The Myth of Supply Chain Assurance 

Supply chains are a vital component of every organization’s global business operations and the backbone of today’s global economy. However, a range of valuable and sensitive information is often shared with suppliers and, when that information is shared, direct control is lost. In 2019, organizations will discover that assuring the security of their supply chain is a lost cause. Instead, it is time to refocus on managing their key data and understanding where and how it has been shared across multiple channels and boundaries, irrespective of supply chain provider. This will cause many organizations to refocus on the traditional confidentiality and integrity components of the information security mix, placing an additional burden on already overstretched security departments. Businesses that continue to focus on assuring supply chain security with traditional approaches, such as self-certified audit and assurance, may preserve the illusion of security in the short term, but will discover to their peril that the security foundations they believed to be in place were lacking.

A Continued Need to Involve the Board

The executive team sitting at the top of an organization has the clearest, broadest view. A serious, shared commitment to common values and strategies is at the heart of a good working relationship between the C-suite and the board. Without sincere, ongoing collaboration, complex challenges like cyber security will be unmanageable. Covering all the bases—defense, risk management, prevention, detection, remediation, and incident response—is better achieved when leaders contribute from their expertise and use their unique vantage point to help set priorities and keep security efforts aligned with business objectives.

Given the rapid pace of business and technology, and the countless elements beyond the C-suite’s control, traditional risk management simply isn’t nimble enough to deal with the perils of cyberspace activity. Enterprise risk management must build on a foundation of preparedness to create risk resilience by evaluating threat vectors from a position of business acceptability and risk profiling. Leading the enterprise to a position of readiness, resilience and responsiveness is the surest way to secure assets and protect people.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.


Unlocking the Power of Biometric Authentication with Behavior Analytics

InfoSec Island - Fri, 01/04/2019 - 6:47am

There are three common types of authentication: something you know (like a password), something you have (like a smart card), and something you are (like a fingerprint or another biometric method). Modern best practices recommend that you use at least two of these in parallel to truly secure your identity as you log on to digital resources – a practice otherwise known as two-factor authentication (2FA).
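As a concrete example of the “something you have” factor, the sketch below generates and verifies a time-based one-time password (TOTP), the mechanism behind most authenticator apps. It assumes the third-party pyotp library is installed (pip install pyotp).

    # TOTP demo: the secret is provisioned once (e.g., via QR code) and stored
    # on the user's device; the server verifies the rolling six-digit code.
    import pyotp

    secret = pyotp.random_base32()  # enrollment step
    totp = pyotp.TOTP(secret)

    code = totp.now()               # what the user's authenticator app displays
    print("accepted" if totp.verify(code) else "rejected")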

Biometrics exploded onto the scene in 2013 with the introduction of Apple’s iPhone 5S Touch ID fingerprint scanning technology. In 2017, Apple pushed facial recognition into the mainstream with its Face ID technology, introduced as the latest authentication feature in its iPhone X model. While common in many circles for years, biometric technologies have now become widely recognized as more secure forms of authentication than the traditional password or token for a wide range of technology needs. But while we do all have a unique face, fingerprint, and irises, even basic biometric authentication has its limits.

Take, for example, the famous researcher from Yokohama National University, who created a graphite mold from a picture of a latent fingerprint on a wine glass that fooled scanners eight times out of ten. Or researchers at UNC, who built digital models of faces from Facebook photos that with 3D and VR technologies were convincing enough to bypass four out of the five authentication systems tested. These instances both highlight that basic biometric technology should not be considered a fool-proof security method.

Taking Biometrics to the Next Level

Fortunately, there is another form of biometrics that can be leveraged for authentication and is dynamic: changing continuously, yet predictable over a long period of time. This is behavior biometrics, or the way users interact with their environment. Examples include the style and speed with which users type on a keyboard, or the way they move and click their mouse.

Unlike basic biometrics such as fingerprint or facial scanning, which simply ask for authentication at the beginning of a task but have no ongoing oversight into what is being done, behavioral biometrics can be analyzed throughout a given activity from start to finish. Through constant analysis of these dynamic behaviors, IT security teams can identify anomalies within the behaviors, alerting them to a potential intrusion or misuse of identities and enabling them to act quickly to remediate any issues.

In many cases, criminals can spend days, weeks or even months in the IT system before being detected. Continuous analysis of behavioral biometrics cripples a hacker’s ability to stay silent within the network.
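A heavily simplified sketch of the underlying idea: enroll a baseline of a user’s typing rhythm, then flag sessions that deviate sharply from it. The intervals and threshold below are invented for illustration – production systems model far richer features than a single average.

    # Toy keystroke-dynamics check: compare a session's inter-key intervals
    # (milliseconds) against a user's enrolled baseline.
    import statistics

    baseline = [112, 98, 105, 120, 101, 95, 110, 108]    # enrolled intervals
    session  = [240, 215, 230, 250, 225, 235, 228, 233]  # current activity

    z = abs(statistics.mean(session) - statistics.mean(baseline)) \
        / statistics.stdev(baseline)

    if z > 3:  # more than three standard deviations from the user's norm
        print(f"anomalous typing rhythm (z={z:.1f}): raise an alert")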

Beyond Real-Time Detection

Behavior biometrics enables security analysts to reduce false alerts and respond to the most important security risks. These teams are often already overwhelmed by thousands of false alerts generated by their existing security solutions, making it difficult to sort through the noise. Behavior biometrics equips security analysts with one of the most accurate ways to track potential threats – anomalies – and provides alerts without false or unnecessary flags.

As biometrics continues to gain popularity in the authentication world, it’s important to keep in mind that multi-factor authentication is critical and behavior biometrics alone is not enough to fully protect your business. The key is to always pair it with a traditional factor: a password, token, SMS verification, smart card, or biometric scan. Verifying users’ identities is critical to safeguarding today’s digital business, and two-factor authentication is vital to ensuring those identities are verified with the utmost accuracy.

About the author: Jackson Shaw is senior director of product management at One Identity, an identity and access management company formerly under Dell. Jackson has been leading security, directory and identity initiatives for 25 years.


Cybercriminals Hide Malware Commands in Malicious Memes

InfoSec Island - Thu, 01/03/2019 - 3:58pm

Trend Micro security researchers have discovered a new piece of malware that receives commands via malicious memes its operators published on Twitter. 

The method used to conceal malicious commands is called steganography and has long been abused by cybercriminals to hide malicious payloads inside files in order to evade security solutions. Several years ago, security researchers observed the technique being abused in exploit kit and malvertising campaigns.

The use of social media platforms such as Twitter to send commands to malware isn’t new either. Malware that abuses such services has been around for several years.

As part of the newly analyzed attack, the actor published two memes (images that are humorous in nature) containing malicious commands on their Twitter account. The memes were published in late October, but the account had been created last year.

The embedded command is parsed by the malware after the malicious meme is downloaded onto the victim’s machine. Detected as TROJAN.MSIL.BERBOMTHUM.AA, the malware itself wasn’t downloaded from Twitter, but managed to infect the victim’s machine via an unknown mechanism.

The memes contained the “/print” command, which instructs the malware to take screenshots of the infected machine’s desktop. The malware then sends the screenshots to a command and control (C&C) server address that it had obtained through a hard-coded URL on pastebin.com.

Once executed on an infected machine, the malware can download memes to extract and then execute the commands embedded inside. The URL address used in the attack is an internal or private IP address, which the security researchers believe is a temporary placeholder used by the attackers.
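Trend Micro’s write-up does not detail the exact encoding, but the textbook steganographic approach hides data in the least significant bits of pixel values. Purely as a hedged illustration of that general technique (using the Pillow library), recovering a message embedded this way could look like the following:

    # Decode a message hidden LSB-style in an image (one bit per color
    # channel, most significant bit first, NUL-terminated). Illustrative of
    # the general technique, not of this specific malware's format.
    from PIL import Image

    def extract_lsb_message(path):
        bits = []
        for pixel in Image.open(path).convert("RGB").getdata():
            for channel in pixel:
                bits.append(channel & 1)
        out = bytearray()
        for i in range(0, len(bits) - 7, 8):
            byte = int("".join(map(str, bits[i:i + 8])), 2)
            if byte == 0:  # stop at the terminator
                break
            out.append(byte)
        return out.decode("ascii", errors="replace")

    print(extract_lsb_message("meme.png"))  # e.g. a command like "/print"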

Based on the commands received via Twitter, the malware could capture the screen, retrieve a list of running processes, capture clipboard content, retrieve the username from the infected machine, or retrieve filenames from a predefined path (such as the desktop, %AppData%, etc.), the security researchers reveal.

Twitter has already suspended the account used in these attacks. 

“Users and businesses can consider adopting security solutions that can protect systems from various threats, such as malware that communicate with benign-looking images, through a cross-generational blend of threat defense techniques,” Trend Micro concludes. 

Related: Sundown Exploit Kit Starts Using Steganography

Related: Android Botnet Uses Twitter for Receiving Commands


Miori IoT Botnet Targets Vulnerability in ThinkPHP

InfoSec Island - Thu, 01/03/2019 - 3:57pm

A recent variant of the Mirai botnet is targeting a remote code execution (RCE) vulnerability in the ThinkPHP framework, Trend Micro security researchers warn.

Dubbed Miori, the threat leverages a relatively new exploit that was published on December 11 and targets ThinkPHP versions prior to 5.0.23 and 5.1.31. A recent surge in events related to the ThinkPHP RCE suggests that other actors are also targeting ThinkPHP for their own nefarious purposes.

Miori, Trend Micro explains, is not the only Mirai offspring to use this RCE exploit as its delivery method. Variants such as IZ1H9 and APEP were observed employing it as well, and all use factory default credentials via Telnet in an attempt to spread to other devices via brute force.

As soon as the target machine is compromised, the malware ensnares it in a botnet that is capable of launching distributed denial-of-service (DDoS) attacks.

The emergence of a new Mirai variant is far from surprising. Ever since the malware’s source code was posted online in October 2016, numerous variants have spawned, including Wicked, Satori, Okiru, Masuta, and others. Even cross-platform variants were observed earlier this year.

Miori, however, isn’t new: Fortinet revealed in May a resemblance to another Mirai variant called Shinoa. Now, Trend Micro has discovered that the malware has adopted said ThinkPHP RCE to spread to vulnerable machines, which shows that its author continues to improve the code.

Once executed, Miori starts Telnet to brute force other IP addresses. The malware was also observed listening on port 42352 (TCP/UDP) for commands from its command and control (C&C) server and sending the command “/bin/busybox MIORI” to verify infection of the targeted system.

After decrypting Miori’s configuration table, Trend Micro’s security researchers found a series of strings revealing some of the malware’s functionality, as well as a list of usernames and passwords the threat uses, some of which are defaults and easy to guess.
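Trend Micro does not publish Miori’s key, but the leaked Mirai source obfuscates its configuration table by XORing every byte with a four-byte key (0xDEADBEEF). Assuming Miori inherits that scheme – an assumption, since only the family resemblance is reported – a deobfuscation sketch looks like this:

    # Mirai-style table deobfuscation. Key value is from the public Mirai
    # source; whether Miori reuses it is an assumption for illustration.
    KEY = (0xDE, 0xAD, 0xBE, 0xEF)

    def toggle(data):
        # XORing with all four key bytes in turn collapses to one combined byte.
        k = KEY[0] ^ KEY[1] ^ KEY[2] ^ KEY[3]
        return bytes(b ^ k for b in data)

    obfuscated = toggle(b"/bin/busybox MIORI")  # round-trip demo
    print(toggle(obfuscated).decode())          # -> /bin/busybox MIORI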

The analysis also revealed two URLs used by the IZ1H9 and APEP variants too, which led the researchers to discover that both use the same string deobfuscation technique as Mirai and Miori. 

The APEP variant, the security researchers explain, does not rely solely on brute force via Telnet for distribution, but also targets CVE-2017-17215, an RCE vulnerability that impacts Huawei HG532 router devices. The same vulnerability was previously said to have been abused in Satori and Brickerbot attacks.

“Mirai has spawned other botnets that use default credentials and vulnerabilities in their attacks. Users are advised to change the default settings and credentials of their devices to deter hackers from hijacking them. As a general rule, smart device users should regularly update their devices to the latest versions,” Trend Micro concludes. 

Related: Mirai Authors Avoid Prison After Working With FBI

Related: Mirai Variants Continue to Spawn in Vulnerable IoT Ecosystem


2019, The Year Ahead in Cloud Security

InfoSec Island - Wed, 01/02/2019 - 10:58am

By 2020, more than four-fifths of enterprise data will be cloud-based. This transition is happening even though the cloud is still a relatively young technology.

As the cloud explodes into ubiquity, black hat hackers, criminal organizations, and nation-states are adjusting their aim. Like parasites, they are targeting hosts with the healthiest, most substantial portions of data. They will soon compromise the cloud more than any other data stronghold.

In 2019, cloud security will confound organizations as concerns over data privacy and the security skills gap pervade. But the cloud will come to its own rescue.

Picking up the cloud security tab

According to Gartner, global spending on cloud security will reach $459 million in 2019, up from $304 million in 2018. Gartner also predicts that subscription and managed services, including software-as-a-service (SaaS), will make up more than half of all security software delivered by 2020. The Gartner data aligns with figures from Bain & Company that confirm an increase in cloud services, including SaaS offerings.

Looking ahead, the cloud will prove to be the best solution to its own security challenges. More security solutions will move to the cloud to become more agile, extensible, and scalable. As attackers adjust their aim, this is the type of security businesses will start to demand for their cloud data, applications, and users. Eventually, most security will be cloud-based, and cloud-based security will ultimately protect the cloud more often than all other approaches combined.

The Impact of the Cloud Security Skills Gap

There are a number of factors influencing the progression toward cloud-based security, including the market’s response to the skills shortage. As more organizations consider migrating to the cloud, the cloud security skills gap is an area of concern for many. Because of this, we see a lot of companies outsourcing to the cloud to compensate for skills they lack in-house.

In 2019, enterprises will increasingly contract with cloud-enabled security vendors, leveraging their cloud and cloud-security expertise to secure cloud environments.

Securing Cloud Workloads While Remaining Compliant

As more of an organization’s data moves to the cloud, security and privacy concerns will continue to press companies to find solutions that secure their cloud workloads and keep them compliant with privacy regulations. According to Crowd Research Partners, threats to data privacy rank second among the top three cloud security challenges; protecting against data loss/leakage and breaches of confidentiality rank first and third, respectively.

It’s no surprise that Europe’s new GDPR will continue to create confusion for companies trying to figure out cloud security. Organizations will need the consent of European citizens to compile and store their data in the cloud.

Spend wisely and follow the smart money: expect a trend toward investing in cloud-based security to mitigate cloud security fears.

About the author: Peter Martini is president and co-founder of iboss. He has been awarded dozens of patents focused on network and mobile security and, with his brother, has been recognized by the industry with several prestigious awards, including Ernst & Young’s Entrepreneur of the Year and a place on Goldman Sachs’ list of the 100 Most Intriguing Entrepreneurs.


The End (of 2018) Is Near: Looking Back for Optimism

InfoSec Island - Mon, 12/31/2018 - 6:28am

Prognostication is risky business. Just days after I originally put together my list of predictions for the cybersecurity world of 2019, Marriott, Dell, Dunkin’ and Quora trashed my carefully crafted analysis.

This is further evidence that predicting events and issues based on unpredictable human behaviors is like picking your spouse on a blind date. Sure, you might be right, but you are just as likely to make a disastrous choice.

This time last year, I gazed deeply into my company’s crystal ball, read the tea leaves in my cup, and boldly made five predictions. Three of them came true in full force:

  1. Government regulations would drive behaviors. The reaction to GDPR, NY DFS, CaCPA, CaSCD and serious talk of a US federal privacy law is proof that institutional behaviors are changing and will continue to do so.
     
  2. Patching will be the Achilles heel of applications. Known CVEs continued to be the root cause of cyberattacks.
     
  3. More of the same problems as in previous years – a no-brainer, unfortunately, since organizations still get caught doing stupid things (cough) Cathay Pacific (cough).

I’m claiming partial credit for the two other 2018 predictions: “Out-of-support software is the next frontier for attacks” and “IoT and Ransomware attacks will (still) be a threat” – and I’m updating them for 2019.

A list of 2019 predictions could easily include all of the same predictions as 2018, but that implies we are not making headway in solving the primary issues that security teams face every day. The reality is, though, we are making progress against cyberattacks. 

Despite the recent meme-inspiring breaches that added more than 600 million records to the wild, the number of breaches reported in 2018 will be down significantly for the first time since 2011. That didn’t happen by accident.

Businesses are accelerating efforts to address the root cause of most cyberattacks – known but unpatched CVEs – more rapidly and efficiently. Research released late in 2018 backs this up: organizations spend an average of 321 hours (roughly $20,000) per week patching CVEs, and 30% of the most severe CVEs are now patched within 30 days, a double-digit improvement.

The number of reported CVEs is also likely to finish the year flat to slightly down for the first time in four years, based on stats from the National Vulnerability Database. This too is evidence that the testing tools and the focus on improving the development process are working. Here again, automation has great potential to turn a one-year change in direction into a trend.

Progress, though, is not always linear or steady. While we wait to see if 2018 is a one-off or a movement, let’s look at what to expect in 2019. 

  1. Fewer data breaches… 
    If current trends hold through the end of 2018, we will see the first year-over-year drop in reported data losses since 2011.
     
  2. …but bigger data losses.
    The number of security breaches may be down, but the size of data losses per attack is growing. Even adjusting for the 2017 Equifax and 2018 Marriott breaches, the number of records lost per attack/breach will double in 2018. Expect that trend to continue into 2019.
     
  3. Unpatched vulnerabilities will get you media attention you don’t want.
    The latest numbers from the Ponemon Institute tell the story: security leaders around the world say that manual patching processes create risk, yet they continue to invest in headcount instead of automated tools like runtime virtual patches that can fix, not just patch, known code flaws with no downtime. Ponemon calls this the Patching Paradox.
     
  4. The security and compliance risks from legacy Java applications only get bigger.
    Depending on whose measuring stick you use, Java 8 accounts for between 79 and 84 percent of Java-based applications, with a little more than 40 percent still being written in Java 6 or Java 7! With no backwards compatibility in Java 11, enterprises with legacy apps (which is most organizations) face a dilemma: what to do with out-of-support but mission-critical applications? (A runtime-inventory sketch follows this list.)
     
  5. More of the same with a touch of “Huh?”
    In a world where SQL injection and cross-site scripting vulnerabilities continue to plague between 30 and 50 percent of all applications, we’re going to see more of the same in 2019 (a minimal illustration of the injection pattern also follows this list). But there will be surprises, too, says Captain Obvious. It could be that ransomware attacks will shift from primarily endpoint vulnerabilities to server threats. Will we see a surge in DDoS attacks linked to the IoT after a year of relative calm in 2018? And what about critical infrastructure attacks from for-profit hackers and nation-states?
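
On the legacy-Java point (item 4 above), the first step is simply knowing which runtime each host actually invokes. Here is a minimal inventory sketch: it shells out to whatever `java` is on the PATH and relies on the long-standing quirk that `java -version` prints its banner to stderr; the version check shown is an assumption about what you care to flag.

    import subprocess

    def installed_java_version():
        """Return the first line of `java -version` output, or None if unavailable."""
        try:
            # Historically, `java -version` writes its banner to stderr.
            result = subprocess.run(["java", "-version"],
                                    capture_output=True, text=True, timeout=10)
        except (OSError, subprocess.TimeoutExpired):
            return None
        banner = result.stderr or result.stdout
        return banner.splitlines()[0] if banner else None

    version = installed_java_version()
    # Pre-Java 9 version strings look like: java version "1.7.0_80"
    if version and ('"1.6' in version or '"1.7' in version):
        print(f"Out-of-support runtime detected: {version}")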

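As for the injection classes in item 5, they persist largely because queries are still built by string concatenation. A minimal, self-contained illustration using Python’s built-in sqlite3 module (hypothetical table and data) shows the vulnerable pattern next to its parameterized fix:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "' OR '1'='1"  # classic injection payload

    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so the payload rewrites the WHERE clause and matches every row.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
    print(len(rows))  # 1 -- the injection matched the whole table

    # Safe: a parameterized query treats the input strictly as data.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print(len(rows))  # 0 -- no user is literally named "' OR '1'='1"
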
The Institute of Operations Management advises that “there are two types of forecasts: lucky or wrong.” Let’s reconvene in a year to see which we are.

About the author: James E. Lee is the Executive Vice President and Global CMO at Waratek. A former CMO at data pioneer ChoicePoint and an expert in data privacy and security, he served nine years on the board of the San Diego-based Identity Theft Resource Center, including three years as Chair. Lee has also led two ANSI efforts to address issues of data privacy and identity management.
