Microsoft

Microsoft Copilot for Security is generally available on April 1, 2024, with new capabilities

Microsoft Malware Protection Center - Wed, 03/13/2024 - 12:00pm

Today, we are excited to announce that Microsoft Copilot for Security will be generally available worldwide on April 1, 2024. The industry’s first generative AI solution will help security and IT professionals catch what others miss, move faster, and strengthen team expertise. Copilot is informed by large-scale data and threat intelligence, including more than 78 trillion security signals processed by Microsoft each day, and coupled with large language models to deliver tailored insights and guide next steps. With Copilot, you can protect at the speed and scale of AI and transform your security operations.

Microsoft Copilot for Security

Powerful new capabilities, new integrations, and industry-leading generative AI—generally available on April 1, 2024.

Learn more

We are inspired by the results of our second Copilot for Security economic study, which shows that experienced security professionals are faster and more accurate when using Copilot, and they overwhelmingly want to continue using Copilot. The gains are truly amazing:

  • Experienced security analysts were 22% faster with Copilot.
  • They were 7% more accurate across all tasks when using Copilot.
  • And, most notably, 97% said they want to use Copilot the next time they do the same task.

This new study focuses on experienced security professionals and expands the randomized controlled trial we published last November, which focused on new-in-career security professionals. Both studies measured the effects on productivity when analysts performed security tasks using Copilot for Security compared to a control group that did not. The combined results of both studies demonstrate that everyone—across all levels of experience and types of expertise—can make gains in security with Copilot. When we put Copilot in the hands of security teams, we can break down barriers to entry and advancement, and improve the work experience for everyone. Copilot enables security for all.

Copilot for Security is now pay-as-you-go

Toward our goal of enabling security for all, Microsoft is also introducing a provisioned pay-as-you-go licensing model that makes Copilot for Security accessible to a wider range of organizations than any other solution on the market. With this flexible, consumption-based pricing model, you can get started quickly, then scale your usage and costs according to your needs and budget. Microsoft Copilot for Security will be available for purchase starting April 1, 2024. Connect with your account representative now so your organization can be among the first to enjoy the incredible gains from Copilot for Security.

Global availability and broad ecosystem

General availability means Copilot for Security will be available worldwide on April 1, 2024. Copilot is multilingual and can process prompts and respond in eight languages with a multilingual interface for 25 different languages, making it ready for all major geographies across North and South America, Europe, and Asia.

Copilot has grown a broad, global ecosystem of more than 100 partners consisting of managed security service providers and independent software vendors. We are so grateful to the partners who continue to play a vital role in empowering everyone to confidently adopt safe and responsible AI.

Partners can learn more about integrating with Copilot.

New Copilot for Security product innovations

Microsoft Copilot for Security helps security and IT professionals amplify their skillsets, collaborate more effectively, see more, and respond faster.

As part of general availability, Copilot for Security includes the following new capabilities:

  • Custom promptbooks allow customers to create and save their own series of natural language prompts for common security workstreams and tasks.
  • Knowledgebase integrations, in preview, empower you to integrate Copilot for Security with your business logic and perform activities based on your own step-by-step guides.
  • Multi-language support now allows Copilot to process prompts and respond in eight different languages with 25 languages supported in the interface.  
  • Third-party integrations from global partners who are actively developing integrations and services.
  • Connect to your curated external attack surface from Microsoft Defender External Attack Surface Management to identify and analyze the most up-to-date information on your organization’s external attack surface risks.
  • Microsoft Entra audit logs and diagnostic logs provide additional insight for a security investigation or IT issue analysis by summarizing the audit logs related to a specific user or event in natural language.
  • Usage reporting provides dashboard insights on how your teams use Copilot so that you can identify even more opportunities for optimization.

To dive deeper into the above announcement and learn about pricing, read the blog on Tech Community. Read the full report to dig into the complete results of our research study or view the infographic. To learn more about Microsoft Copilot for Security, visit our product page or check out our solutions that include Copilot. If you’re interested in a demo or are ready to purchase, please contact your sales representative.

“Threat actors are getting more sophisticated. Things happen fast, so we need to be able to respond fast. With the help of Copilot for Security, we can start focusing on automated responses instead of manual responses. It’s a huge gamechanger for us.” 

—Mario Ferket, Chief Information Security Officer, Dow

AI-powered security for all

With general availability, Copilot for Security will be available as two rich user experiences: in an immersive standalone portal or embedded into existing security products.

Integration of Copilot with Microsoft Security products will make it even easier for your IT and security professionals to take advantage of speed and accuracy gains demonstrated in our study. Enjoy the product portals you know and love, now enhanced with Copilot capabilities and skills specific to use cases for each product.

The unified security operations platform, coming soon, delivers an embedded Copilot experience within the Microsoft Defender portal for security information and event management (SIEM) and extended detection and response (XDR) that will prompt users as they investigate and respond to threats. Copilot automatically surfaces relevant details for summaries, drives efficiency with guided response, empowers analysts at all levels with natural language to Kusto Query Language (KQL) and script and file analysis, and now includes the ability to assess risks with the latest Microsoft threat intelligence.
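
For illustration, here is a minimal, hypothetical sketch (not Copilot’s internal implementation) of the kind of KQL a natural-language prompt such as “show devices that ran credential theft tooling in the last 24 hours” might translate into, submitted through the Microsoft Graph advanced hunting endpoint. The table and column names come from Defender XDR advanced hunting; token acquisition is omitted.

```python
import requests

# Hypothetical example: KQL of the kind a natural-language prompt about
# credential theft activity in the last 24 hours might translate into.
HUNTING_QUERY = """
DeviceProcessEvents
| where Timestamp > ago(24h)
| where ProcessCommandLine has_any ("mimikatz", "sekurlsa::logonpasswords")
| project Timestamp, DeviceName, AccountName, ProcessCommandLine
"""

def run_hunting_query(access_token: str) -> list[dict]:
    """Submit an advanced hunting query via Microsoft Graph and return result rows."""
    response = requests.post(
        "https://graph.microsoft.com/v1.0/security/runHuntingQuery",
        headers={"Authorization": f"Bearer {access_token}"},
        json={"Query": HUNTING_QUERY},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])
```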

Copilot in Microsoft Entra user risk investigation, now in preview, helps you prevent identity compromise and respond to threats quickly. This embedded experience in Microsoft Entra provides a summary in natural language of the user risk indicators and tailored guidance for resolving the risk. Copilot also recommends ways to automate prevention and resolution for future identity attacks, such as with a recommended Microsoft Entra Conditional Access policy, to increase your security posture and keep help desk calls to a minimum.
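
As a rough illustration of what such a recommendation could look like in practice, the sketch below creates a risk-based Conditional Access policy in report-only mode through the Microsoft Graph API. The policy name and scoping are hypothetical placeholders, not the specific policy Copilot recommends.

```python
import requests

# Hypothetical risk-based Conditional Access policy, created in report-only
# mode so its impact can be reviewed before enforcement.
policy = {
    "displayName": "Example: Require MFA for high sign-in risk",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "signInRiskLevels": ["high"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

def create_policy(access_token: str) -> dict:
    """Create the Conditional Access policy via Microsoft Graph."""
    response = requests.post(
        "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
        headers={"Authorization": f"Bearer {access_token}"},
        json=policy,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```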

To help data security and compliance administrators prioritize and address critical alerts more easily, Copilot in Microsoft Purview now provides concise alert summaries, integrated insights, and natural language support within their trusted investigation workflows with the click of a button.

Copilot in Microsoft Intune, now in preview, will help IT professionals and security analysts make better-informed decisions for endpoint management. Copilot in Intune can simplify root cause determination with complete device context, error code analysis, and device configuration comparisons. This makes it possible to detect and remediate issues before they become problems.

Discover, protect, and govern AI usage

As more generative AI services are introduced in the market for all business functions, it is crucial to recognize that as this technology brings new opportunities, it also introduces new challenges and risks. With this in mind, Microsoft is providing customers with greater visibility, protection, and governance over their AI applications, whether they are using Microsoft Copilot or third-party generative AI apps. We want to make it easier for everyone to confidently and securely adopt AI.

To help organizations protect and govern the use of AI, we are enabling the following experiences within our portfolio of products:

  • Discover AI risks: Security teams can discover potential risks associated with AI usage, such as sensitive data leaks and users accessing high-risk applications.
  • Protect AI apps and data: Security and IT teams can protect the AI applications in use and the sensitive data being reasoned over or generated by them, including the prompts and responses.
  • Govern usage: Security teams can govern the use of AI applications by retaining and logging interactions with AI apps, detecting any regulatory or organizational policy violations when using those apps, and investigating any new incidents.

At Microsoft Ignite in November 2023, we introduced the first wave of capabilities to help secure and govern AI usage. Today, we are excited to announce the new out-of-the-box threat detections for Copilot for Microsoft 365 in Defender for Cloud Apps. This capability, along with the data security and compliance controls in Microsoft Purview, strengthens the security of Copilot so organizations can work on all types of data, whether sensitive or not, in a secure and responsible way. Learn more about how to secure and govern AI.

Expanded end-to-end protection to help you secure everything

Microsoft continues to expand on our long-standing commitment to providing customers with the most complete end-to-end protection for your entire digital estate. With the full Microsoft Security portfolio, you can gain even greater visibility, control, and governance—especially as you embrace generative AI—with solutions and pricing that fit your organization. New or recent product features include:

Microsoft Security Exposure Management is a new unified posture and attack surface management solution within the unified security operations platform that gives you insights into your overall assets and recommends priority security initiatives for continuous improvement. You’ll have a comprehensive view of your organization’s exposure to threats and automatic discovery of critical assets to help you proactively improve your security posture and lower the risk of exposure of business-critical assets and sensitive data. Visualization tools give you an attacker’s-eye view to help you investigate exposure attempts and uncover potential attack paths to critical assets through threat modeling and proactive risk exploration. It’s now easier than ever to identify exposure gaps and take action to minimize risk and business disruption.

Adaptive Protection, a feature of Microsoft Purview, is now integrated with Microsoft Entra Conditional Access. This integration allows you to better safeguard your organization from insider risks such as data leakage, intellectual property theft, and confidentiality violations. With this integration, you can create Conditional Access policies to automatically respond to insider risks and block user access to applications to secure your data.

Microsoft Communication Compliance now provides both sentiment indicators and insights to enrich Microsoft Purview Insider Risk Management policies and to identify communication risks across Microsoft Teams, Exchange, Microsoft Viva Engage, Copilot, and third-party channels.

Microsoft Intune launched three new solutions in February as part of the Microsoft Intune Suite: Intune Enterprise Application Management, Microsoft Cloud PKI, and Intune Advanced Analytics. Intune Endpoint Privilege Management is also rolling out the option to enable support-approved elevations.

Security for all in the age of AI

Microsoft Copilot for Security is a force multiplier for the entire Microsoft Security portfolio, which integrates more than 50 categories within six product families to form one end-to-end Microsoft Security solution. By implementing Copilot for Security, you can protect your environment from every angle, across security, compliance, identity, device management, and privacy. In the age of AI, it’s more important than ever to have a unified solution that eliminates the gaps in protection that are created by siloed tools.

The coming general availability of Copilot on April 1, 2024, is truly a milestone moment. With Copilot, you and your security team can confidently lead your organization into the age of AI. We will continue to deliver on Microsoft’s vision for security: to empower defenders with the advantage of industry-leading generative AI and to provide the tools to safely, responsibly, and securely deploy, use, and govern AI. We are so proud to work together with you to drive this AI transformation and enable security for all.

Join us April 3, 2024, at the Microsoft Secure Tech Accelerator for a deep dive into technical information that will help you and your team implement Copilot. Learn how to secure your AI, see demonstrations, and ask our product team questions. RSVP now.

Microsoft Secure

Watch the second annual Microsoft Secure digital event to learn how to bring world-class threat intelligence, complete end-to-end protection, and industry-leading, responsible AI to your organization.

Watch now

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Microsoft Copilot for Security is generally available on April 1, 2024, with new capabilities appeared first on Microsoft Security Blog.

Categories: Microsoft

International Women’s Day: Expanding cybersecurity opportunities in the era of AI

Microsoft Malware Protection Center - Fri, 03/08/2024 - 12:00pm

March is a meaningful month for me personally as we honor Women’s History Month and International Women’s Day. Some of the most powerful role models in my own life are the women who raised me and the community of women who’ve provided the support and encouragement that continues to empower me to believe that I can be anything I aspire to. In security, this is particularly important because women are still underrepresented and so critical to the future of our industry. I’ve had the great fortune of working with many wonderful women throughout my career, and one of the things I find so often to be true is that the path to a career in security does not have to be a linear one. There is no right way to come into this industry, no one background or training ground required. In fact, diversity of experiences and perspectives is the critical secret sauce to building a safer world for everyone.

Here are just a few examples of incredible women at Microsoft who may never have envisioned cybersecurity as their destination when they were starting out:

  • In our recent Cyber Signals briefing, I had the privilege of talking to Homa Hayatyfar, Principal Detection Analytics Manager, Microsoft, who has seen how her pathway to a career in cybersecurity was nonlinear. She arrived at her career in cybersecurity by way of a research background in biochemistry and molecular biology—along with a passion for solving complex puzzles—and she believes that may be what the industry needs more of.
  • From our threat intelligence team, Fanta Orr, Intelligence Analysis Director, improves the understanding of and protection against nation-state cyberthreats to Microsoft customers and the global digital ecosystem. She’s a seasoned foreign affairs professional who spent well over a decade in United States government service before pivoting to cyberthreat analysis.
  • When Sherrod DeGrippo, Director of Threat Intelligence Strategy, began studying fine arts in college, internet access was a rare luxury and the cybersecurity field as we know it today was just emerging. She developed a dual interest in the new world of online communication and do-it-yourself computing after her first experience with bulletin board systems at 14 years old. She thinks her fine arts education helps her discover new ideas and methods for threat intelligence, after more than 20 years in cybersecurity and an unplanned role in incident response. 

“Threat intelligence is about taking subjective information and turning it into objective protections. Ultimately, it’s data-driven intuition and it’s extremely powerful. Women learn this skill early, and in so many areas of life they’re natural threat intelligence analysts.”

—Sherrod DeGrippo, Director of Threat Intelligence Strategy, Microsoft

These cyberdefenders work every day to keep our world safe and also support and mentor other women to create their own trails and pathways. I invite you to follow them on LinkedIn and attend the Women in CyberSecurity (WiCyS) conference presentations and RSA Conference, where many of these amazing women will share their stories over the next few months.

We have made a lot of progress, but there is still much more opportunity

A huge opportunity still exists to welcome more women into cybersecurity. More than 4 million cybersecurity jobs are available globally.2 These are roles that women can help fill and triumph in, but we must lay the groundwork to make such roles an attractive and available career option, and to help change the perception of what it takes to succeed.

While there’s been steady progress over the past few years, women fill just 21% of cybersecurity leadership roles and only 17% of board member positions in cybersecurity.3 In 2022, Microsoft Security commissioned a survey to explore the reasons behind the gender gap in cybersecurity skills. Just 44% of women who responded said they feel adequately represented in the industry.

Several factors contribute to fewer women joining the cybersecurity profession than men:

  • 28% of respondents believed parents were more encouraging of sons than daughters to explore technology and cybersecurity fields.
  • A lack of cybersecurity role models for women, including women in leadership roles.
  • Implicit bias in the hiring process and a belief that men are a better fit for roles related to technology.

We need to create a pathway to success

By fostering an environment that welcomes women into cybersecurity, we break down barriers and build stronger, more resilient cyber defense mechanisms. Diversity isn’t about filling quotas; it’s about building resilient, innovative teams capable of outthinking and outmaneuvering cyberadversaries. It’s up to us to shift the view that cybersecurity is too demanding—especially as AI can help to alter this balance. And it’s past time to change the perception that cybersecurity is a field of hacker men in hoodies in their basement.

We need to continue to be role models and allies for underrepresented groups, especially for those from underprivileged backgrounds. There is often no easy way for underprivileged aspiring entrants to practice their craft from a young age and eventually enter science, technology, engineering, and mathematics (STEM) fields, regardless of other factors. To change this, we need to invest in and support the many not-for-profits that help those from underprivileged backgrounds.

Inspiring the next generation of cybersecurity professionals

During the past year, Microsoft has partnered with many organizations similarly committed to building a more diverse cybersecurity workforce. One huge way these organizations are doing that is by offering training to girls so they can become the next generation of cybersecurity professionals through programs like Girl Security, TechTogether, and IGNITE Worldwide (Inspiring Girls Now In Technology Evolution).

Another initiative we support through partnerships offers training to women interested in switching careers or upskilling their cybersecurity knowledge through programs like WiCyS and Executive Women’s Forum (EWF). We also partner with global education programs, including CyberShikshaa in India and WOMCY in Latin America, to empower women and minorities in cybersecurity.

These programs and initiatives have a tremendous impact on encouraging more girls to consider careers in cybersecurity and getting more women to join the cybersecurity workforce. Among other benefits, they help girls and women build confidence, meet female cybersecurity role models, develop or enhance their skills, and gain experience to add to their resumes.

To further develop women’s careers, Microsoft Philanthropies and Women in Cloud jointly sponsor the Women in Cloud Cybersecurity Scholarship to provide women with structured skills development, certification opportunities, and employability readiness coaching. By 2025, more than 5,100 scholarships will be awarded.

The momentum is due in part to community-wide efforts to increase the number of women and diverse employees in cybersecurity roles. Community organizations like Blacks in Cybersecurity (BIC) and WiCyS play a crucial role in providing pipelines for marginalized groups to enter the cybersecurity field.

AI as an ally in cybersecurity diversity

AI is revolutionizing how we approach cybersecurity, from predictive analytics to automated threat detection. Yet beyond algorithms and data models, there’s an urgent need for human insight. According to a study from Utica University, women bring unique perspectives along with strong analytical and problem-solving skills, which are essential for identifying and addressing security threats, and they tend to take a more risk-averse approach, which can help reduce the likelihood of human error in security operations.4 These perspectives help shape our AI for security and help ensure that AI is fair, reliable and safe, transparent, and inclusive. We also believe that creators of AI, as well as its users, must hold themselves to a standard of accountability. And within that context, the possibilities for this exciting technology are limitless.

Happy International Women’s Day! While progress is being made—and opportunities are opening—for women and minorities in cybersecurity, much can still be done to overcome barriers to entry for these groups. Let’s continue to work for more representation in cybersecurity by forging new paths with more allies.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

1Women to Watch in Cybersecurity, Forbes. October 26, 2022.

2How the Economy, Skills Gap and Artificial Intelligence are Challenging the Global Cybersecurity Workforce, ISC2. 2023.

3International Women’s Day: Only One-Fifth of Cybersecurity Leadership Roles Filled by Women, IT Security Guru. March 8, 2023.

4Why we need more women in cybersecurity, TechBeacon.

The post International Women’s Day: Expanding cybersecurity opportunities in the era of AI appeared first on Microsoft Security Blog.

Categories: Microsoft

Evolving Microsoft Security Development Lifecycle (SDL): How continuous SDL can help you build more secure software

Microsoft Malware Protection Center - Thu, 03/07/2024 - 12:00pm

The software developers and systems engineers at Microsoft work with large-scale, complex systems, which requires collaboration among diverse, global teams while navigating the demands of rapid technological advancement. Today, we’re sharing how they’re tackling security challenges in the white paper “Building the next generation of the Microsoft Security Development Lifecycle (SDL)”, created by pioneers of future software development practices.

Two decades of evolution

It’s been 20 years since we introduced the Microsoft Security Development Lifecycle (SDL)—a set of practices and tools that help developers build more secure software, now used industry-wide. Mirroring the culture of Microsoft to uphold security and born out of the Trustworthy Computing initiative, the aim of SDL was—and still is—to embed security and privacy principles into technology from the start and prevent vulnerabilities from reaching customers’ environments.

In 20 years, the goal of SDL hasn’t changed. But the software development and cybersecurity landscape has—a lot.

With cloud computing, Agile methodologies, and continuous integration/continuous delivery (CI/CD) pipeline automation, software is shipped faster and more frequently. The software supply chain has become more complex and vulnerable to cyberattacks. And new technologies like AI and quantum computing pose new challenges and opportunities for security.

SDL is now a critical pillar of the Microsoft Secure Future Initiative, a multi-year commitment that advances the way we design, build, test, and operate our Microsoft Cloud technology to ensure that we deliver solutions meeting the highest possible standard of security.

Next generation of the Microsoft SDL

Learn how we're tackling security challenges.

Read the white paper

Continuous evaluation

Microsoft has been evolving the SDL to what we call “continuous SDL”. In short, Microsoft now measures security state more frequently and throughout the development lifecycle. Why? Because times have changed: products are no longer shipped on an annual or biannual basis. With the cloud and CI/CD practices, services are shipped daily, or sometimes multiple times a day.

Data-driven methodology

To achieve scale across Microsoft, we automate measurement with a data-driven methodology when possible. Data is collected from various sources, including code analysis tools like CodeQL. Our compliance engine uses this data to trigger actions when needed.

CodeQL: A static analysis engine used by developers to perform security analysis on code outside of a live environment.

While some SDL controls may never be fully automated, the data-driven methodology helps deliver better security outcomes. In pilot deployments of CodeQL, 92% of action items were addressed and resolved in a timely fashion. We also saw a 77% increase in CodeQL onboarding amongst pilot services.
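
As a simple illustration of the kind of CodeQL run that can feed such a pipeline (this is generic CodeQL CLI usage, not Microsoft’s internal compliance engine), a build step might create a database and analyze it to SARIF, which downstream tooling can then ingest:

```python
import subprocess

# Illustrative CodeQL CLI invocation: build a database for a Python project
# and analyze it with the standard query pack, emitting SARIF for ingestion
# by downstream compliance or reporting tooling.
subprocess.run(
    ["codeql", "database", "create", "codeql-db",
     "--language=python", "--source-root=."],
    check=True,
)
subprocess.run(
    ["codeql", "database", "analyze", "codeql-db",
     "codeql/python-queries",
     "--format=sarif-latest", "--output=results.sarif"],
    check=True,
)
```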

Transparent, traceable evidence

Software supply chain security has become a top priority due to the rise of high-profile attacks and the increase in dependencies on open-source software. Transparency is particularly important, and Microsoft has pioneered traceability and transparency in the SDL for years. Just as one example, in response to Executive Order 14028, we added a requirement to the SDL to generate software bills of material (SBOMs) for greater transparency.
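
To make the idea concrete, here is a minimal, illustrative sketch of an SPDX-style SBOM document assembled in code; it is not Microsoft’s internal SBOM tooling, and the package entries are placeholders.

```python
import json
from datetime import datetime, timezone

# Illustrative only: a minimal SPDX 2.3-style SBOM document with one placeholder package.
sbom = {
    "spdxVersion": "SPDX-2.3",
    "dataLicense": "CC0-1.0",
    "SPDXID": "SPDXRef-DOCUMENT",
    "name": "example-service-sbom",
    "documentNamespace": "https://example.com/sbom/example-service",
    "creationInfo": {
        "created": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "creators": ["Tool: example-sbom-generator"],
    },
    "packages": [
        {
            "SPDXID": "SPDXRef-Package-requests",
            "name": "requests",
            "versionInfo": "2.31.0",
            "downloadLocation": "NOASSERTION",
        }
    ],
}

# Write the SBOM alongside the build output so it can ship with the release.
with open("sbom.spdx.json", "w") as fh:
    json.dump(sbom, fh, indent=2)
```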

But we didn’t stop there.

To provide transparency into how fixes happen, we now architect the storage of evidence into our tooling and platforms. Our compliance engine collects and stores data and telemetry as evidence. By doing so, when the engine determines that a compliance requirement has been met, we can point to the data used to make that determination. The output is available through an interconnected “graph”, which links together various signals from developer activity and tooling outputs to create high-fidelity insights. This helps us give customers stronger assurances of our security end-to-end.

Modernized practices

Beyond making the SDL automated, data-driven, and transparent, Microsoft is also focused on modernizing the practices that the SDL is built on to keep up with changing technologies and ensure our products and services are secure by design and by default. In 2023, six new requirements were introduced, six were retired, and 19 received major updates. We’re investing in new threat modeling capabilities, accelerating the adoption of new memory-safe languages, and focusing on securing open-source software and the software supply chain.

We’re committed to providing continued assurance to open-source software security, measuring and monitoring open-source code repositories to ensure vulnerabilities are identified and remediated on a continuous basis. Microsoft is also dedicated to bringing responsible AI into the SDL, incorporating AI into our security tooling to help developers identify and fix vulnerabilities faster. We’ve built new capabilities like the AI Red Team to find and fix vulnerabilities in AI systems.

By introducing modernized practices into the SDL, we can stay ahead of attacker innovation, designing faster defenses that protect against new classes of vulnerabilities.

How can continuous SDL benefit you?

Continuous SDL can help you in several ways:

  • Peace of mind: You can continue to trust that Microsoft products and services are secure by design, by default, and in deployment. Microsoft follows the continuous SDL for software development to continuously evaluate and improve its security posture.
  • Best practices: You can learn from Microsoft’s best practices and tools to apply them to your own software development. Microsoft shares its SDL guidance and resources with the developer community and contributes to open-source security initiatives.
  • Empowerment: You can prepare for the future of security. Microsoft invests in new technologies and capabilities that address emerging threats and opportunities, such as post-quantum cryptography, AI security, and memory-safe languages.

Where can you learn more?

For more details and visual demonstrations on continuous SDL, read the full white paper by SDL pioneers Tony Rice and David Ornstein.

Learn more about the Secure Future Initiative and how Microsoft builds security into everything we design, develop, and deploy.

The post Evolving Microsoft Security Development Lifecycle (SDL): How continuous SDL can help you build more secure software appeared first on Microsoft Security Blog.

Categories: Microsoft

Enhancing protection: Updates on Microsoft’s Secure Future Initiative

Microsoft Malware Protection Center - Wed, 03/06/2024 - 12:00pm

At Microsoft, we’re continually evolving our cybersecurity strategy to stay ahead of threats targeting our products and customers. As part of our efforts to prioritize transparency and accountability, we’re launching a regular series on milestones and progress of the Secure Future Initiative (SFI)—a multi-year commitment advancing the way we design, build, test, and operate our technology to help ensure that we deliver secure, reliable, and trustworthy products and services, enabling our customers to achieve their digital transformation goals and protect their data and assets from malicious actors. 

Secure Future Initiative

A new world of security.

Learn more

Microsoft’s mission to empower every person and every organization on the planet to achieve more depends on security. We recognize that when Microsoft plays a role in pioneering cutting-edge technology, we also have the responsibility to lead the way in protecting our customers and our own infrastructure from cyberthreats. Against the exponentially increasing pace, scale, and complexity of the security landscape, it’s critical that we evolve to be more dynamic, proactive, and integrated in our security model to continue meeting the changing needs and expectations of our customers and the market. Our rich history in innovation is a testament to our commitment to delivering impactful and trustworthy products and services that shape industries and transform lives. This legacy continues as we consistently work to set new benchmarks for safeguarding our digital future.

Expanding upon our foundation of built-in security, in November 2023 we launched the Secure Future Initiative (SFI) to directly address the escalating speed, scale, and sophistication of cyberattacks we’re witnessing today. This initiative is an anticipatory strategy reflecting the actions we are taking to “build better and respond better” in security, using automation and AI to scale this work, and strengthen identity protection against highly sophisticated cyberattacks. It’s not about tailoring our defenses to a single cyberattack: SFI underscores the importance of a continually and proactively evolving security model that adapts to the ever-changing digital landscape.

Four months have passed since we introduced SFI, and the achievements in our engineering developments demonstrate the concrete actions we’ve implemented to make sure that Microsoft’s security infrastructure stays strong in a constantly changing digital environment.  Read more below for updates on the initiative.

Transforming software development with automation and AI

As noted in our November 2, 2023 SFI announcement, we’re evolving our security development lifecycle (SDL) to continuous SDL—which we define as applying systematic processes to continuously integrate cybersecurity protection against emerging threat patterns as our engineers code, test, deploy, and operate our systems and services. Read more about continuous SDL here.

As part of our evolution to continuous SDL, we’re deploying CodeQL for code analysis to 100% of our commercial products. CodeQL is a powerful static analysis tool in the software security space. It offers advanced capabilities across numerous programming languages that detect complex security mistakes within source code. While our code repos go through rigorous SDL assessment leveraging traditional tooling, as part of our SFI work we now use CodeQL to cover 86% of our Azure DevOps code repositories from our commercial businesses in our Cloud and AI, enterprise and devices, security and strategic missions, and technology groups. We are expanding this further and anticipate that completing the consolidation process of the last 14% will be a complex, multi-year journey due to specific code repositories and engineering tools requiring additional work. In 2023, we onboarded more than one billion lines of source code to CodeQL, which highlights our commitment toward progress.

As part of efforts to broaden adoption of memory safe languages, we donated USD1 million in December 2023 to the Rust Foundation, an integral partner in stewarding the Rust programming language. Additionally, we’re providing an additional USD3.2 million to the Alpha-Omega project. In partnership with the Open Source Security Foundation (OpenSSF) and co-led with Google and Amazon, Alpha-Omega’s mission is to catalyze security improvements to the most widely deployed open source software projects and ecosystems critical to global infrastructure. Our contribution this year will help expand coverage, more than doubling the number of widely deployed open source projects we analyze, including 100 of the most commonly used open source AI libraries. The Alpha-Omega 2023 Annual Report highlights security and process improvements from last year and strides toward fostering a sustainable culture of security within open source communities.  

Together, our SFI-driven advances in expanding continuous SDL, fostering secure open source updates, and adopting memory safe languages strengthen the foundation of software throughout Microsoft’s own products and platforms, as well as the wider industry.

Strengthening identity protection against highly sophisticated attacks

As part of our SFI engineering advances, we’re enforcing the use of standard identity libraries such as the Microsoft Authentication Library (MSAL) enterprise-wide across Microsoft. This initiative is pivotal in achieving a cohesive and reliable identity verification framework. It facilitates seamless, policy-compliant management of user, device, and service identities across all Microsoft platforms and products, ensuring a fortified and consistent security posture.

Our efforts have already seen noteworthy achievements in several key areas. We’ve reached a major milestone with full integration of MSAL into Microsoft 365 across all four major platforms: Windows, macOS, iOS, and Android, marking a significant advancement toward universal standardization. This integration ensures that Microsoft 365 applications are underpinned by a unified authentication mechanism. In the Azure ecosystem, encompassing critical tools such as Microsoft Visual Studio, Azure SDK, and Microsoft Azure CLI, MSAL has been fully adopted, underscoring our commitment to secure and streamlined authentication processes within our development tools. Furthermore, over 99% of internal service-to-service authentication requests, using Microsoft Entra for authorization, now utilize MSAL, highlighting our dedication to boosting security and efficiency in inter-service communications. Ultimately, these milestones further harden identity and authorization across our vast estate, making it increasingly difficult for threats and intruders to move between users and systems.
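
For teams standardizing their own services on MSAL, a minimal client-credentials sketch with the MSAL Python library looks roughly like this; the tenant, client ID, and secret are placeholders, and certificate or managed identity credentials are preferable to secrets in production.

```python
import msal

# Illustrative MSAL client-credentials flow for a daemon or service identity.
app = msal.ConfidentialClientApplication(
    client_id="<application-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",  # placeholder; prefer certificates or managed identities
)

# MSAL caches tokens and refreshes them as needed.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    token = result["access_token"]
else:
    raise RuntimeError(result.get("error_description", "token acquisition failed"))
```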

Looking ahead, we’re setting ambitious objectives to further bolster our security infrastructure. By the end of this year, we aim to fully automate the management of Microsoft Entra ID and Microsoft Account (MSA) keys. This process will include rapid rotation and secure storage of keys within Hardware Security Modules (HSMs), significantly enhancing our security measures. Additionally, we’re on track to ensure that Microsoft’s most widely used applications transition to standard identity libraries by the end of the year. Through these collective efforts we aim to not only enhance security but also improve the user experience and streamline authentication processes across our product suite.

Stay up to date on the latest Secure Future Initiative updates

As we forge ahead with the SFI, Microsoft remains unwavering in its commitment to continuously evolve our security posture and provide transparency in our communications. We’re dedicated to innovating, protecting, and leading in an era where digital threats are constantly changing. The progress we’ve shared today is only a fraction of our comprehensive strategy to safeguard the digital infrastructure and our customers who rely on it.

In the coming months, we will continue to share our progress on enhancing our capabilities, deploying innovative technologies, and strengthening our collaborations to address the complexities of cybersecurity. We’re committed to building a safer, more resilient digital world, with a focus on transparency and safety in every step.

To learn more about the Microsoft SFI and read more details on our three engineering advances, visit our built-in security site.

Learn more about Microsoft Security solutions and bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Enhancing protection: Updates on Microsoft’s Secure Future Initiative appeared first on Microsoft Security Blog.

Categories: Microsoft

​​Secure SaaS applications with Valence Security and Microsoft Security​​

Microsoft Malware Protection Center - Tue, 03/05/2024 - 12:00pm

This blog post is part of the Microsoft Intelligent Security Association guest blog series. Learn more about MISA.  

Software as a service (SaaS) adoption has accelerated at a lightning speed, enabling collaboration, automation, and innovation for businesses large and small across every industry vertical—from government, education, and financial services to tech companies. Every SaaS application is now expanding its offering to allow better integration with the enterprise ecosystem and advanced collaboration features, becoming more of a “platform” than an “application.” To further complicate the security landscape, business users are managing these SaaS applications with little to no security oversight, creating a decentralized administration model. All this is leading to a growing risk surface with complex misconfigurations that can expose an organization’s identities, sensitive data, and business processes to malicious actors.

To combat this challenge, Valence and Microsoft Security work together to ensure that SaaS applications are configured according to the best security practices and improve the security posture of identities configured in each individual SaaS application. Together, Valence and Microsoft:  

  • Centrally manage SaaS identities’ permissions and access.
  • Enforce strong authentication by ensuring proper MFA (multi-factor authentication) and SSO (single sign-on) enrollment and managing local SaaS users.
  • Detect and revoke unauthorized non-human SaaS identities such as APIs, service accounts, and tokens.
  • Incorporate SaaS threat detection capabilities to improve SaaS incident response.

As most sensitive corporate data has shifted from on-premises devices to the cloud, security teams need to ensure they manage the risks of how this data is being accessed and managed. Integrating Valence’s SaaS Security with the Microsoft Security ecosystem now provides a winning solution.

SaaS applications are prime targets  

Recent high-profile breaches have shown that attackers are targeting SaaS applications and are leveraging misconfigurations and human errors to gain high-privilege access to sensitive applications and data. While many organizations have implemented SSO and MFA as their main line of defense when it comes to SaaS, recent major breaches have proven otherwise. Attackers have identified that MFA fatigue, social engineering, and targeting the SaaS providers themselves can bypass many of the existing mechanisms that security teams have put in place. These add to high-profile breaches where attackers leveraged legitimate third-party open authorization (OAuth) tokens to gain unauthorized access to SaaS applications, among many other attack examples.

State of SaaS security risks 

Our 2023 SaaS Security Report analyzed real SaaS environments to measure their security posture before an effective SaaS security program was implemented. The results showed that no organization enforced MFA on 100% of its identities—there are some exceptions, such as service accounts, contractors, and shared accounts, or simply a lack of effective monitoring of drift. In addition, one out of eight SaaS accounts is dormant and not actively used. Offboarding users is not only important to save costs; attackers also like to target these accounts for account takeover attacks since they are typically less monitored. Other key stats were that 90% of externally shared files haven’t been used by external collaborators for at least 90 days and that every organization has granted multiple third-party vendors organization-wide access to their emails, files, and calendars.

Figure 1. Top SaaS Security gaps identified in the 2023 State of SaaS Security Report.

Holistic SaaS security strategy 

Establishing a holistic SaaS security strategy requires bringing together many elements: shadow SaaS discovery, strong authentication, identity management of both humans and non-humans, managing and remediating SaaS misconfigurations, enforcing data leakage prevention policies, and, finally, scalable incident response. Valence and Microsoft take security teams one step further toward a more holistic approach.

Valence joined the Microsoft Intelligent Security Association (MISA) and integrated with Microsoft security products—Microsoft Entra ID and Microsoft Sentinel—to enhance customers’ capabilities to manage their SaaS risks, effectively remediate them, and respond to SaaS breaches. The Valence SaaS Security Platform provides insight and context on SaaS risks such as misconfigurations, identities, data shares, and SaaS-to-SaaS integrations, extending existing controls with SaaS Security Posture Management (SSPM) and SaaS risk remediation capabilities. Valence is also a proud participant in the Partner Private Preview of Microsoft Copilot for Security. This involves working with Microsoft product teams to help shape Copilot for Security product development in several ways, including validation and refinement of new and upcoming scenarios, providing feedback on product development and operations to be incorporated into future product releases, and validation and feedback of APIs to assist with Copilot for Security’s extensibility.

Figure 2. Illustrative data: The Valence Platform provides a single pane of glass to find and fix SaaS risk across four core use cases: data protection, SaaS to SaaS governance, identity security, and configuration management. 

Secure SaaS human and non-human identities

In the modern identity-first environment, most attackers focus on targeting high-privilege users, dormant accounts, and other risks. Enforcing zero trust access has become a core strategy for many security teams, and security teams need to identify all the identities they need to secure. Microsoft Entra SSO management combined with Valence’s SaaS application monitoring—to detect accounts created directly in the applications—provides a holistic view into human and non-human identities (enterprise applications, service accounts, APIs, OAuth tokens, and third-party apps).

Microsoft Entra ID centrally enforces strong authentication such as MFA, and Valence discovers enforcement gaps or users that are not managed by the central SSO. Valence also monitors the SaaS applications themselves to discover the privileges granted to each identity and provides recommendations on how to enforce least privilege with minimal administrative access. To continuously validate access based on risk, the final piece of a zero trust strategy, Valence leverages the risky user and service principal signals from Microsoft Entra ID and combines them with signals from other SaaS applications for a holistic view into identity risks.
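
As an illustration of the kind of Entra ID signal referenced here, the sketch below pulls high-risk users from the Microsoft Graph Identity Protection API; merging these results with per-application signals is left to the consuming platform, and token acquisition is omitted.

```python
import requests

def get_high_risk_users(access_token: str) -> list[dict]:
    """Illustrative call: list users Microsoft Entra ID Protection currently rates as high risk."""
    response = requests.get(
        "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers",
        headers={"Authorization": f"Bearer {access_token}"},
        params={"$filter": "riskLevel eq 'high'"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("value", [])
```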

Protect SaaS applications 

Microsoft has a wide SaaS offering that is fueling enterprise innovation. These services are central to core business functions and employee collaboration, cover many use cases, and are spread across multiple business units, but they are tied together in many cases, such as identity and access management, and therefore their security posture is often related as well. Managing the security posture of SaaS services can be complex because of the multiple configurations and the potential cross-service effects that require security teams to build their expertise across a wide range of SaaS applications.

Many security teams view SaaS apps as part of their more holistic view into SaaS security posture management and would like to create and enforce cross-SaaS security policies. Valence’s platform integrates with Microsoft Entra ID and other Microsoft SaaS services via Microsoft Graph to normalize the complex data sets and enable security teams to closely monitor the security posture of their SaaS applications in Microsoft alongside the rest of their SaaS environment.

Enhance SaaS threat detection and incident response 

Improving SaaS security posture proactively reduces the chances of a breach, but unfortunately SaaS breaches can still occur, and organizations need to prepare their threat detection coverage and incident response plans. The built-in human and non-human identity threat detection capabilities of Microsoft Entra ID, combined with Microsoft Sentinel log correlation and security automation and Microsoft Copilot for Security’s advanced AI capabilities, create a powerful combination to detect and respond to threats. Valence expands existing detections from compromised endpoints and identities with important SaaS context—for example, did the compromised device belong to a SaaS admin user? Did the compromised identity perform suspicious activities in other SaaS applications? The expanded detections provide critical insights to prioritize and assess the blast radius of breaches. Additionally, Valence’s SaaS threat detection can trigger threat detection workflows in Microsoft products based on its unique indicator-of-compromise monitoring.
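
As a generic illustration of the Sentinel side of this correlation (not Valence’s integration itself), a workspace query for repeated failed sign-ins might look like the following, using the Azure Monitor query SDK; the workspace ID and threshold are placeholders.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Illustrative correlation query against a Microsoft Sentinel (Log Analytics) workspace:
# accounts with many failed sign-ins in the last 24 hours.
QUERY = """
SigninLogs
| where ResultType != "0"
| summarize FailedSignIns = count() by UserPrincipalName, IPAddress
| where FailedSignIns > 10
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=QUERY,
    timespan=timedelta(hours=24),
)

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```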

Together, Valence and Microsoft combine the best of all worlds when it comes to SaaS security. From SaaS discovery, through SaaS security posture management, remediating risks, and detecting threats—Valence and Microsoft enable secure adoption of SaaS applications. Modern SaaS risks and security challenges require a holistic view into SaaS risk management and remediation. Get started today.

About Valence Security 

Valence is a leading SaaS security company that combines SSPM and advanced remediation with business user collaboration to find and fix SaaS security risks. SaaS applications are becoming decentrally managed and more complex, which is introducing misconfiguration, identity, data, and SaaS-to-SaaS integration risks. The Valence SaaS Security Platform provides visibility and remediation capabilities for business-critical SaaS applications. With Valence, security teams can empower their business to securely adopt SaaS. Valence is backed by leading cybersecurity investors like Microsoft’s M12 and YL Ventures, and is trusted by leading organizations. Valence is available for purchase through Azure Marketplace. For more information, visit their website.

Be among the first to hear about new products, capabilities, and offerings at the Microsoft Secure digital event on March 13, 2024. Learn from industry luminaries and influencers. Register today.

Learn more

To learn more about the Microsoft Intelligent Security Association (MISA), visit our website where you can learn about the MISA program, product integrations, and find MISA members. Visit the video playlist to learn about the strength of member integrations with Microsoft products. 

​​To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post ​​Secure SaaS applications with Valence Security and Microsoft Security​​ appeared first on Microsoft Security Blog.

Categories: Microsoft

Microsoft Secure: Learn expert AI strategy at our online event

Microsoft Malware Protection Center - Mon, 03/04/2024 - 1:00pm

As the most influential technology of our lifetime, AI has the power to reshape how organizations secure their environments. AI’s impact on cybersecurity and the efforts of Microsoft to bring generative AI to organizations worldwide will be a major topic at the Microsoft Secure digital event on March 13, 2024, from 9:00 AM to 11:00 AM PT. Register today to secure your spot so you can be among the first to hear about the latest Microsoft Security technology innovations designed to empower cybersecurity teams.

Microsoft Secure is a two-hour digital showcase that will focus on product announcements and immediate use cases for new capabilities. Join thousands of other cybersecurity professionals inspired by AI’s promise and eager to gain product knowledge for an advantage over bad actors.  

Watch the video for details from Microsoft Security’s Vice President, Security Marketing, Alym Rayani, on how Microsoft Secure will empower you and your cybersecurity efforts. 

Here’s a sneak preview of what you can expect at the event. 

A keynote with AI product updates across the Microsoft Security portfolio  

At Microsoft, we understand what is required to create and operate AI applications securely at scale. We understand the opportunity we have to empower everyone to develop AI that is safe and reliable. That’s why we are committed to putting secure and responsible AI solutions in the hands of security professionals everywhere—AI is transforming security. 

Hear all about our latest features and capabilities at the Microsoft Secure welcome keynote by Vasu Jakkal, Corporate Vice President, Microsoft Security Business, and Charlie Bell, Executive Vice President, Microsoft Security, along with other product leaders. They will share product innovations across the Microsoft Security portfolio and the advantages that help you address the changing threat landscape:   

  • AI for security: We’ll share exciting news about Microsoft Copilot for Security, learnings from our early access program, new features, and new ways to try the solution. Copilot for Security puts generative AI in the hands of security and IT professionals to help them supercharge their skills, collaborate more, see more, and respond faster—all informed by threat intelligence.  
  • Securing and governing AI: Explore how the features of Microsoft Purview, Microsoft Defender, and Microsoft Entra make it easier to secure and govern AI. Across the Microsoft Security portfolio, we are innovating rapidly to give our customers a new category of critical tools for securing AI that deliver greater visibility, control, and governance as you embrace generative AI.  
  • Expanded end-to-end security: Gain broad visibility and control across your digital estate and protect your environment from every angle, across security, compliance, identity, device management, and privacy. We integrate more than 50 categories within six product families to form one end-to-end Microsoft Security solution. These product families work together, each powering the next with more context and integrated controls.

Real-world Microsoft Security applications from a customer

Don’t miss a conversation with Dow Chief Information Security Officer (CISO), Mario Ferket, on his experience using the Microsoft Security portfolio, hosted by Irina Nechaeva, General Manager, Product Marketing, Identity, and Access. Hear real-world applications from a leader at a manufacturing company on how they are leveraging AI to defend their enterprise.

Demos to practically inform your cybersecurity strategy 

After the keynote and customer story, our product experts will host three informative demo sessions to offer deeper understanding of the latest cybersecurity innovations from Microsoft.  

  • 9:30-10 AM PT: Microsoft Copilot for Security: Tailoring defense with AI—Principal Product Manager, Brandon Dixon, and Senior Director, Microsoft Security Business, Scott Woodgate, will show you Copilot for Security in action and share how to initiate Copilot and use customizable features to fit your security needs.  
  • 10-10:30 AM PT: Secure and govern AI to enable responsible adoption—Principal Product Manager, Neta Haiby, and General Manager of Data Security, Compliance, and Privacy, Herain Oberoi, will offer guidance on how to leverage built-in security and compliance controls to secure and govern your AI stack. They’ll address AI adoption challenges we see in the market such as preventing oversharing, data leaks, and misuse.   
  • 10:30-11 AM PT: Stay ahead of threats with proactive posture management—Alym Rayani, Vice President, Security Marketing, and Tomer Teller, Group Project Manager for Exposure Management, will explore how to detect, disrupt, and prevent threats in near real time with Microsoft Exposure Management solutions. Stopping cyberattacks at machine speed is crucial, but prevention is even more powerful. 

Register for Microsoft Secure today

Register to watch the live Microsoft Secure digital event. If you can’t join us live, watch on-demand content after March 13, 2024​.  

For a more technical deep-dive into Microsoft Secure announcements, mark your calendar on April 3, 2024 for the Microsoft Secure Tech Accelerator to get live demos of how Microsoft Security products help secure your AI, and ask our product team questions.

And if you’re attending RSA Conference 2024 in San Francisco, join us for Microsoft Pre-Day, on May 5, 2024, to connect with our product experts in-person and be the first to hear even more announcements from Microsoft Security. ​ 

See you at Microsoft Secure!

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post Microsoft Secure: Learn expert AI strategy at our online event appeared first on Microsoft Security Blog.

Categories: Microsoft

Defend against human-operated ransomware attacks with Microsoft Copilot for Security​​

Microsoft Malware Protection Center - Mon, 03/04/2024 - 12:00pm

Organizations everywhere are seeing an increase in human-operated ransomware threats, with Microsoft’s own telemetry showing a 200% increase in threats since September 2022.1 When an entire organization is attacked, it needs every advantage it can get to protect against skilled, coordinated cyberthreats. The availability of Microsoft Copilot for Security brings SecOps teams a new tool with the power of generative AI to help outpace and outsmart threat actors. In the following demonstration videos, we take a detailed, step-by-step look at how it can help surface, contain, and mitigate a human-operated ransomware attack.

Microsoft Copilot for Security

Powerful new capabilities, new integrations, and industry-leading generative AI.

Learn more

The power of Microsoft Defender XDR with Microsoft Copilot for Security

Microsoft Defender XDR coordinates detection, prevention, investigation, and response across endpoints, identities, email applications, and the cloud to provide integrated protection against sophisticated ransomware threats. In this series of demonstration videos, we share real-world scenarios where Copilot is helping SecOps teams navigate threat detection, investigation, and managed response. To begin, we look at a situation where a human-operated ransomware attack has just taken place. The incident started with suspicious activity on two devices, where a credential theft tool was detected and stopped by automatic attack disruption within Microsoft Defender XDR. 

Watch the video: Human-operated ransomware 

Respond at the speed and scale of AI   

Bad actors can move through a system with damaging speed. With the ever-increasing frequency and sophistication of attacks, paired with the ongoing shortage of security talent, it can be difficult for leaders to fully staff security teams. When every second counts, as during an active ransomware incident, Copilot for Security brings together critical context so security professionals can share clear, concise, and comprehensive summaries of active incidents, giving affected parties a deep understanding of the situation even when an incident happens after business hours. With the power of AI, Copilot is helping analysts write these incident narratives 90% faster than in the past.2  

Endpoint Security

Learn more 

In the case of this human-operated ransomware incident, Microsoft Defender for Endpoint raised the first alert, detecting possible human-operated malicious activity on a device. Many complex and sophisticated attacks like ransomware use scripts and tools like PowerShell and Mimikatz to access and manipulate files, tamper with system recovery settings, and delete file backups. In this incident, attackers also attempted to access Primary Refresh Tokens (PRTs) and used Windows Sysinternals tools for evasion. But with line-by-line script examination in Copilot, security analysts could immediately understand what each section of code does and quickly identify a script as malicious or benign. This Copilot capability directly helps junior security analysts “upskill” their expertise by learning the context behind the code.    

Gain critical incident context  

When faced with a complex attack, Copilot for Security can help analysts understand what’s happening quickly, so they can protect and defend their organization at machine speed and scale. In an examination of the same ransomware incident, our next demonstration video shows how the Copilot incident summary focused in on a PowerShell script, leading analysts to a critical piece of the incident puzzle.  

Watch the video: Defender Embed to Standalone Copilot  

Without enough time and without PowerShell expertise, it could be difficult for a security analyst to fully understand the ramifications of an attack like this. But this is where Copilot can help: it quickly analyzes the PowerShell script and provides a plain-English explanation of the key steps within it. This helps analysts gain a full understanding of the incident and prioritize the containment and mitigation work that matters most. Copilot also works with Microsoft Defender Threat Intelligence to investigate the script hosting, determine that it is malicious, and share evidence connecting the script to a known threat actor. Moving from Microsoft Defender to the standalone Copilot experience allows analysts to connect to Microsoft Sentinel and Microsoft Intune, surfacing a key piece of information in this serious incident: a device that was noncompliant with current security policies and missing a key compliance update that may have prevented the attack. In just a few minutes, Copilot surfaced the right information to provide remediation steps and advance organizational understanding to proactively prepare for (and hopefully prevent) future attacks.   

Augment critical expertise and upskill analysts 

In our last demonstration video, we look at how security teams can utilize Copilot to stretch their skill sets, understand incidents more completely, and gain an extra hand when resources are hard to come by. 

Watch the video: User account research

Copilot for Security enables junior security analysts to complete more complex tasks with skills like natural language to Kusto Query Language translation and malicious script analysis. In this ransomware incident, analysts used Copilot to generate a PowerShell script to validate the configuration of all affected systems. By then looking at a compromised device, analysts learn the source of the compromise and discover the device wasn’t compliant because it was mis-grouped when it was first assigned. With this information and more, surfaced and organized at the speed of AI by Copilot, analysts now have a more complete understanding of how the ransomware attack happened and how it can be prevented in the future. When a single ransomware incident can turn any organization upside down, security analysts can lean on Copilot for global threat intelligence, industry best practices, and tailored insights to outpace and outsmart adversaries. 

Learn more 

Join us online at Microsoft Secure on March 13, 2024, to discover new ways to try Microsoft Copilot for Security. Experience world-class threat intelligence, end-to-end protection, and industry-leading, responsible AI through hands-on demos. And register now for our three-part webinar series, “Intro to Microsoft Copilot for Security”: the first webinar, on March 19, covers the basics of generative AI; the second, on March 26, shows how to get started with Copilot for Security; and the third, on April 2, delves into Copilot for Security best practices. 

Learn more about how Microsoft Copilot for Security can help your team protect at the speed and scale of AI. And for more helpful tips and information, view the Copilot for Security Playlist on the Microsoft Security Channel on YouTube.  

​​To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

1 Microsoft Digital Defense Report 2023 (MDDR) | Microsoft Security Insider

2 Randomized Controlled Trial for Microsoft Security Copilot, Benjamin G. Edelman, James Bono, Sida Peng, Roberto Rodriguez, Sandra Ho. November 29, 2023.

The post Defend against human-operated ransomware attacks with Microsoft Copilot for Security​​ appeared first on Microsoft Security Blog.

Categories: Microsoft

Announcing Microsoft’s open automation framework to red team generative AI Systems

Microsoft Malware Protection Center - Thu, 02/22/2024 - 12:00pm

Today we are releasing an open automation framework, PyRIT (Python Risk Identification Toolkit for generative AI), to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.

At Microsoft, we believe that security practices and generative AI responsibilities need to be a collaborative effort. We are deeply committed to developing tools and resources that enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances. This tool, together with the previous investments we have made in red teaming AI since 2019, represents our ongoing commitment to democratize securing AI for our customers, partners, and peers.   

The need for automation in AI Red Teaming

Red teaming AI systems is a complex, multistep process. Microsoft’s AI Red Team leverages a dedicated interdisciplinary group of security, adversarial machine learning, and responsible AI experts. The Red Team also leverages resources from the entire Microsoft ecosystem, including the Fairness center in Microsoft Research; AETHER, Microsoft’s cross-company initiative on AI Ethics and Effects in Engineering and Research; and the Office of Responsible AI. Our red teaming is part of our larger strategy to map AI risks, measure the identified risks, and then build scoped mitigations to minimize them.

Over the past year, we have proactively red teamed several high-value generative AI systems and models before they were released to customers. Through this journey, we found that red teaming generative AI systems is markedly different from red teaming classical AI systems or traditional software in three prominent ways.

1. Probing both security and responsible AI risks simultaneously

We first learned that while red teaming traditional software or classical AI systems mainly focuses on identifying security failures, red teaming generative AI systems includes identifying both security risks and responsible AI risks. Responsible AI risks, like security risks, can vary widely, ranging from generating content with fairness issues to producing ungrounded or inaccurate content. AI red teaming needs to explore the potential risk space of security and responsible AI failures simultaneously.

2. Generative AI is more probabilistic than traditional red teaming

Secondly, we found that red teaming generative AI systems is more probabilistic than traditional red teaming. Put differently, executing the same attack path multiple times on a traditional software system would likely yield similar results. Generative AI systems, however, have multiple layers of non-determinism: the same input can produce different outputs. This can stem from the app-specific logic; from the generative AI model itself; from the orchestrator, which can engage different extensibility points or plugins while controlling the system's output; and even from the input itself, which tends to be natural language, where small variations can produce different outputs. Unlike traditional software systems with well-defined APIs and parameters that can be examined with tools during red teaming, generative AI systems require a strategy that accounts for the probabilistic nature of their underlying elements.
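
This probabilistic behavior is easy to demonstrate with a small experiment. The following Python sketch is purely illustrative, assuming a simulated target in place of a real model endpoint and hypothetical names throughout; it shows why a single probe of a non-deterministic system is weak evidence and why red teams repeat the same input many times:

    import random

    def simulated_target(prompt: str) -> str:
        # Toy stand-in for a generative AI system: the same prompt can yield a
        # different output on every call, mirroring the non-determinism described above.
        return random.choice(["refusal", "benign answer", "harmful answer"])

    def estimate_failure_rate(prompt: str, trials: int = 100) -> float:
        # Probe the same input repeatedly and report how often the output is judged
        # harmful; a failure rate across many trials is more meaningful than one probe.
        failures = sum(1 for _ in range(trials) if simulated_target(prompt) == "harmful answer")
        return failures / trials

    if __name__ == "__main__":
        print(f"Observed failure rate: {estimate_failure_rate('the same prompt, every time'):.0%}")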

3. Generative AI systems architecture varies widely 

Finally, the architecture of these generative AI systems varies widely: from standalone applications to integrations in existing applications, and across input and output modalities such as text, audio, images, and video.

These three differences make a triple threat for manual red team probing. To surface just one type of risk (say, generating violent content) in one modality of the application (say, a chat interface in a browser), red teams need to try different strategies multiple times to gather evidence of potential failures. Doing this manually for all types of harms, across all modalities, and across different strategies can be exceedingly tedious and slow.

This does not mean automation is always the solution. Manual probing, though time-consuming, is often needed for identifying potential blind spots. Automation is needed for scaling but is not a replacement for manual probing. We use automation in two ways to help the AI red team: automating our routine tasks and identifying potentially risky areas that require more attention.

In 2021, Microsoft developed and released Counterfit, a red team automation framework for classical machine learning systems. Although Counterfit still delivers value for traditional machine learning systems, we found that it did not meet our needs for generative AI applications, because the underlying principles and the threat surface had changed. Because of this, we re-imagined how to help security professionals red team AI systems in the generative AI paradigm, and our new toolkit was born.

We would also like to acknowledge that there has been work in the academic space to automate red teaming, such as PAIR, as well as open-source projects, including garak.

PyRIT for generative AI Red teaming 

PyRIT is battle-tested by the Microsoft AI Red Team. It started off as a set of one-off scripts as we began red teaming generative AI systems in 2022. As we red teamed different varieties of generative AI systems and probed for different risks, we added features that we found useful. Today, PyRIT is a reliable tool in the Microsoft AI Red Team’s arsenal.

The biggest advantage we have found so far using PyRIT is the efficiency gain. For instance, in one of our red teaming exercises on a Copilot system, we were able to pick a harm category, generate several thousand malicious prompts, and use PyRIT’s scoring engine to evaluate the output from the Copilot system, all in a matter of hours instead of weeks.

PyRIT is not a replacement for manual red teaming of generative AI systems. Instead, it augments an AI red teamer’s existing domain expertise and automates the tedious tasks for them. PyRIT shines a light on the hot spots where risk could be, which the security professional can then explore incisively. The security professional is always in control of the strategy and execution of the AI red team operation; PyRIT provides the automation code that takes the initial dataset of harmful prompts provided by the security professional and uses the LLM endpoint to generate more harmful prompts.

However, PyRIT is more than a prompt generation tool; it changes its tactics based on the response from the generative AI system and generates the next input to the generative AI system. This automation continues until the security professional’s intended goal is achieved.
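
To illustrate that loop in miniature, the sketch below is a conceptual outline only, not PyRIT’s actual interface: send_prompt, score_response, and next_prompt are hypothetical placeholder callables standing in for the target call, the scoring engine, and the tactic-adjustment step.

    from dataclasses import dataclass

    @dataclass
    class Turn:
        prompt: str
        response: str
        score: float

    def run_multiturn_probe(objective, seed_prompt, send_prompt, score_response, next_prompt, max_turns=5):
        # Send a prompt to the target, score the response, adapt the next prompt based
        # on the conversation so far, and stop once the objective is met. This mirrors
        # the pattern described above; it is not PyRIT's real API.
        history = []                                     # the "memory" of the exchange
        prompt = seed_prompt
        for _ in range(max_turns):
            response = send_prompt(prompt)               # call the target generative AI system
            score = score_response(objective, response)  # 0.0 (safe) through 1.0 (objective met)
            history.append(Turn(prompt, response, score))
            if score >= 0.9:                             # intended goal achieved; stop probing
                break
            prompt = next_prompt(history)                # change tactics based on the response
        return history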

PyRIT components

Abstraction and extensibility are built into PyRIT, because we always want to be able to extend and adapt its capabilities to the new capabilities that generative AI models engender. We achieve this through five interfaces: targets, datasets, a scoring engine, support for multiple attack strategies, and memory.

  • Targets: PyRIT supports a variety of generative AI target formulations, whether exposed as a web service or embedded in an application. PyRIT supports text-based input out of the box and can be extended to other modalities as well. PyRIT supports integrating with models from Microsoft Azure OpenAI Service, Hugging Face, and Azure Machine Learning Managed Online Endpoint, effectively acting as an adaptable bot for AI red team exercises on designated targets, supporting both single-turn and multi-turn interactions. 
  • Datasets: This is where the security professional encodes what they want the system to be probed for. It could either be a static set of malicious prompts or a dynamic prompt template. Prompt templates allow the security professionals to automatically encode multiple harm categories—security and responsible AI failures—and leverage automation to pursue harm exploration in all categories simultaneously. To get users started, our initial release includes prompts that contain well-known, publicly available jailbreaks from popular sources.
  • Extensible scoring engine: The scoring engine behind PyRIT offers two options for scoring the outputs from the target AI system: using a classical machine learning classifier or using an LLM endpoint and leveraging it for self-evaluation (a minimal sketch of the self-evaluation approach follows this list). Users can also use Azure AI Content filters as an API directly.  
  • Extensible attack strategy: PyRIT supports two styles of attack strategy. The first is single-turn: PyRIT sends a combination of jailbreak and harmful prompts to the AI system and scores the response. The second is multi-turn: the system sends a combination of jailbreak and harmful prompts to the AI system, scores the response, and then responds to the AI system based on the score. While single-turn attack strategies are faster in computation time, multi-turn red teaming allows for more realistic adversarial behavior and more advanced attack strategies.
  • Memory: PyRIT saves intermediate input and output interactions so users can perform in-depth analysis later. The memory feature also makes it possible to share the conversations explored by the PyRIT agent and increases the range the agents can explore, enabling longer multi-turn conversations.
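
As a rough illustration of the LLM self-evaluation option mentioned under the scoring engine above, the sketch below assumes a hypothetical ask_llm callable standing in for any judge-model call; it is not a PyRIT function.

    def score_with_llm_judge(ask_llm, harm_category: str, output_text: str) -> bool:
        # Ask a judge model whether the target system's output falls into a given
        # harm category; the judge's YES/NO answer becomes the score.
        verdict = ask_llm(
            "You are reviewing output from another AI system. "
            f"Does the following text contain {harm_category}? Answer YES or NO only.\n\n"
            f"{output_text}"
        )
        return verdict.strip().upper().startswith("YES")
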
Get started with PyRIT

PyRIT was created in response to our belief that the sharing of AI red teaming resources across the industry lifts all boats. We encourage our peers across the industry to spend time with the toolkit and see how it can be adopted for red teaming your own generative AI applications.

  1. Get started with the PyRIT project here. To get acquainted with the toolkit, our initial release includes a set of demos and common-scenario notebooks, including one that shows how to use PyRIT to automatically jailbreak Lakera’s popular Gandalf game.
  2. We are hosting a webinar on PyRIT to demonstrate how to use it in red teaming generative AI systems. If you would like to see PyRIT in action, please register for our webinar in partnership with the Cloud Security Alliance.
  3. Learn more about what Microsoft’s AI Red Team is doing and explore more resources on how you can better prepare your organization for securing AI.
  4. Watch Microsoft Secure online to explore more product innovations to help you take advantage of AI safely, responsibly, and securely. 
Contributors 

Project created by Gary Lopez; Engineering: Richard Lundeen, Roman Lutz, Raja Sekhar Rao Dheekonda, Dr. Amanda Minnich; Broader involvement from Shiven Chawla, Pete Bryan, Peter Greko, Tori Westerhoff, Martin Pouliot, Bolor-Erdene Jagdagdorj, Chang Kawaguchi, Charlotte Siska, Nina Chikanov, Steph Ballard, Andrew Berkley, Forough Poursabzi, Xavier Fernandes, Dean Carignan, Kyle Jackson, Federico Zarfati, Jiayuan Huang, Chad Atalla, Dan Vann, Emily Sheng, Blake Bullwinkel, Christiano Bianchet, Keegan Hines, eric douglas, Yonatan Zunger, Christian Seifert, Ram Shankar Siva Kumar. Grateful for comments from Jonathan Spring. 

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Announcing Microsoft’s open automation framework to red team generative AI Systems appeared first on Microsoft Security Blog.

Categories: Microsoft

Get the most out of Microsoft Copilot for Security with good prompt engineering

Microsoft Malware Protection Center - Wed, 02/21/2024 - 12:00pm

The process of writing, refining, and optimizing inputs—or “prompts”—to encourage generative AI systems to create specific, high-quality outputs is called prompt engineering. It helps generative AI models organize better responses to a wide range of queries—from the simple to the highly technical. The basic rule is that good prompts equal good results.

Prompt engineering is a way to “program” generative AI models in natural language, without requiring coding experience or deep knowledge of datasets, statistics, and modeling techniques. Prompt engineers play a pivotal role in crafting queries that help generative AI models learn not just the language, but also the nuance and intent behind the query. A high-quality, thorough, and knowledgeable prompt in turn influences the quality of AI-generated content, whether it’s images, code, data summaries, or text.

Prompt engineering is important because it allows AI models to produce more accurate and relevant outputs. When prompts are precise and comprehensive, an AI model can better understand the task it is performing and generate responses that are more useful to humans.

The benefits of prompt engineering include:

  • Improving the speed and efficiency of generative AI tasks, such as writing complex queries, summarizing data, and generating content.
  • Enhancing the skills and confidence of generative AI users—especially novices—by providing guidance and feedback in natural language.
  • Leveraging the power of foundation models, which are large language models built on transformer architecture and packed with information, to produce optimal outputs with few revisions.
  • Helping mitigate biases, confusion, and errors in generative AI outputs by fine-tuning effective prompts.
  • Helping bridge the gap between raw queries and meaningful AI-generated responses—and reduce the need for manual review and post-generation editing.

Why good prompts are important

Prompt engineering is a skill that can be learned and improved over time by experimenting with different prompts and observing the results. There are also tools and resources that can help people with prompt engineering, such as prompt libraries, prompt generators, or prompt evaluators.

The following examples demonstrate the importance of clarity, specificity, and context in crafting effective prompts for generative AI.
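
For instance (these are illustrative examples rather than guidance drawn from the product), a vague prompt such as “Tell me about this incident” forces the model to guess at scope, format, and audience, while a specific prompt such as “Summarize incident 2024-1234 in five bullet points for a non-technical executive audience, focusing on business impact and current remediation status” states the task, the format, the audience, and the context, and will reliably produce a more useful response.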

How to use prompts in security

Prompting is central to Copilot for Security, as it is the main way to query the generative AI system and get the desired outputs. As described above, prompting is the process of writing, refining, and optimizing inputs to encourage Copilot for Security to create specific, high-quality outputs.

Effective prompts give Copilot for Security adequate and useful parameters to generate valuable responses. Security analysts or researchers should include the following elements when writing a prompt (an illustrative example follows the list):

  • Goal: Specific, security-related information that you need.
  • Context: Why you need this information or how you’ll use it.
  • Expectations: Format or target audience you want the response tailored to.
  • Source: Known information, data source(s), or plugins Copilot for Security should use.
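
For example, a prompt that covers all four elements might read (a hypothetical illustration, with the plugin and scenario chosen only for the sake of the example): “Using the Microsoft Defender XDR plugin, summarize the latest high-severity incident, explain how the attacker gained initial access, and format the result as a short briefing for a non-technical leadership team.” Here the goal is the incident summary, the context is the initial-access question, the expectation is the briefing format and audience, and the source is the named plugin.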

When prompts are precise and comprehensive, Copilot for Security can better understand the task it is performing and generate responses that are more useful to humans. Well-crafted prompts also help mitigate biases, confusion, and errors in Copilot for Security outputs.

Save time with top prompts

Featured prompts are a set of predefined prompts that are designed to help you accomplish common security-related tasks with Copilot for Security. They are based on best practices and feedback from security experts and customers.

You can also access the featured prompts by typing a forward slash (/) in the prompt bar and selecting the one that matches your objective. For example, you can use the featured prompt “Analyze a script or command” to get information on a suspicious script or command.

Some of the featured prompts available in Copilot for Security are:

  • Analyze a script or command: This prompt helps you analyze and interpret a command or script. It identifies the script language, the purpose of the script, the potential risks, and the recommended actions.
  • Summarize a security article: This prompt helps you summarize a security article or blog post. It extracts the main points, the key takeaways, and the implications for your organization.
  • Generate a security query: This prompt helps you generate a security query for a specific data source, such as Microsoft Sentinel, Microsoft Defender XDR, or Microsoft Azure Monitor. It converts your natural language request into a query language, such as Kusto Query Language (KQL) or Microsoft Graph API.
  • Generate a security report: This prompt helps you generate a security report for a specific audience, such as executives, managers, or analysts. It uses the information from your previous prompts and responses to create a concise and informative report.
Use promptbooks to save time

A promptbook is a collection of prompts that have been put together to accomplish a specific security-related task—such as incident investigation, threat actor profile, suspicious script analysis, or vulnerability impact assessment. You can use the existing promptbooks as templates or examples and modify them to suit your needs.

Using promptbooks in Copilot is a way to accomplish specific security-related tasks with a series of prompts that run in sequence. Each promptbook requires a specific input—such as an incident number, a threat actor name, or a script string—and then generates a response based on the input and the previous prompts. For example, the incident investigation promptbook can help you summarize an incident, assess its impact, and provide remediation steps.

Some of the promptbooks available in Copilot for Security are:

  • Incident investigation: This promptbook helps you investigate an incident by using either the Microsoft Sentinel or Microsoft Defender XDR plugin. It generates an executive report for a nontechnical audience that summarizes the investigation.
  • Threat actor profile: This promptbook helps you get an executive summary about a specific threat actor. It searches for any existing threat intelligence articles about the actor, including known tools, tactics, and procedures (TTPs) and indicators, and provides remediation suggestions.
  • Suspicious script analysis: This promptbook helps you analyze and interpret a command or script. It identifies the script language, the purpose of the script, the potential risks, and the recommended actions.
  • Vulnerability impact assessment: This promptbook helps you assess the impact of a publicly disclosed vulnerability on your organization. It provides information on the vulnerability, the affected products, the exploitation status, and the mitigation steps.

To use a promptbook, you can either type an asterisk (*) in the prompt bar and select the promptbook you want to use or select the Promptbooks button above the prompt area. Then you can provide the required input and wait for Copilot for Security to generate the response. You can also ask follow-up questions or provide feedback in the same session.

Common Copilot prompts

The following list of prompts is an excerpt of the Top 10 prompts infographic, which provides prompts utilized and recommended by customers and partners with great success. Use them to spark ideas for creating your own prompts.

Get started with prompts in Copilot

We know creating precise and comprehensive prompts produces accurate, relevant responses. By understanding the fundamentals of good prompt engineering, security analysts can improve the speed and efficiency of generative AI tasks, mitigate biases, reduce output errors, and more—all without requiring coding experience or deep knowledge of datasets, statistics, and modeling techniques. The prompt engineering best practices described here, along with featured prompts and promptbooks included in Copilot for Security, can help security teams utilize the power of generative AI to improve their workflow, focus on higher-level tasks, and minimize tedious work.

Learn more about Microsoft Copilot for Security.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Get the most out of Microsoft Copilot for Security with good prompt engineering appeared first on Microsoft Security Blog.

Categories: Microsoft

Navigating NIS2 requirements with Microsoft Security solutions

Microsoft Malware Protection Center - Tue, 02/20/2024 - 12:00pm

The Network and Information Security Directive 2 (NIS2) is a continuation and expansion of the previous European Union (EU) cybersecurity directive introduced back in 2016. With NIS2, the EU expands the original baseline of cybersecurity risk management measures and reporting obligations to include more sectors and critical organizations. The purpose of establishing a baseline of security measures for digital service providers and operators of essential services is to mitigate the risk of cyberthreats and improve the overall level of cybersecurity in the EU. NIS2 also introduces more accountability through strengthened reporting obligations and increased sanctions or penalties.

Organizations have until October 17, 2024, to improve their security posture before they’ll be legally obligated to live up to the requirements of NIS2. The broadened directive stands as a critical milestone for tech enthusiasts and professionals alike. Our team at Microsoft is excited to lead the charge in decoding and navigating this new regulation, especially its impact on compliance and how cloud technology can help organizations adapt.

In this blog, we’ll share the key features of NIS2 for security professionals, how your organization can prepare, and how Microsoft Security solutions can help. And for business leaders, check out our downloadable guide for high-level insights into the people, plans, and partners that can help shape effective NIS2 compliance strategies. 

NIS2 key features 

As we take a closer look at the key features of NIS2, we see the new directive includes risk assessments, multifactor authentication, security procedures for employees with access to sensitive data, and more. NIS2 also includes requirements around supply chain security, incident management, and business recovery plans. In total, the comprehensive framework ups the bar from previous requirements to bring: 

  • Stronger requirements and more affected sectors.
  • A focus on securing business continuity—including supply chain security.
  • Improved and streamlined reporting obligations.
  • More serious repercussions—including fines and legal liability for management.
  • Localized enforcement in all EU Member States. 

Preparing for NIS2 may take considerable effort for organizations still working through digital transformation. But it doesn’t have to be overwhelming. 

NIS2 guiding principles guide

Get started on your transformation with three guiding principles for preparing for NIS2.

Download the guide today

Proactive defense: The future of cloud security

At Microsoft, our approach to NIS2 readiness is a blend of technical insight, innovative strategies, and deep legal understanding. We’re dedicated to nurturing a security-first mindset—one that’s ingrained in every aspect of our operations and resonates with the tech community’s ethos. Our strategy for NIS2 compliance addresses the full range of risks associated with cloud technology. And we’re committed to ensuring that Microsoft’s cloud services set the benchmark for regulatory compliance and cybersecurity excellence in the tech world. Now more than ever, cloud technology is integral to business operations. With NIS2, organizations are facing a fresh set of security protocols, risk management strategies, and incident response tactics. Microsoft cloud security management tools are designed to tackle these challenges head-on, helping to ensure a secure digital environment for our community.  

NIS2 compliance aligns to the same Zero Trust principles addressed by Microsoft Security solutions, which can help provide a solid wall of protection against cyberthreats across any organization’s entire attack surface. If your security posture is aligned with Zero Trust, you’re well positioned to assess and help assure your organization’s compliance with NIS2. 

Figure 1. Risks associated with securing an organization’s external attack surface. 

For effective cybersecurity, it takes a fully integrated approach to protection and streamlined threat investigation and response. Microsoft Security solutions provide just that, with: 

  • Microsoft Sentinel – Gain visibility and manage threats across your entire digital estate with a modern security information and event management (SIEM) solution. 
  • Microsoft Defender XDR – Stop attacks and coordinate response across assets with extended detection and response (XDR) built into Microsoft 365 and Azure. 
  • Microsoft Defender Threat Intelligence – Expose and eliminate modern threats using dynamic cyberthreat intelligence. 
Next steps for navigating new regulatory terrain 

The introduction of NIS2 is reshaping the cybersecurity landscape. We’re at the forefront of this transformation, equipping tech professionals—especially Chief Information Security Officers and their teams—with the knowledge and tools to excel in this new regulatory environment. To take the next step for NIS2 in your organization, download our NIS2 guiding principles guide or reach out to your Microsoft account team to learn more. 

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post Navigating NIS2 requirements with Microsoft Security solutions appeared first on Microsoft Security Blog.

Categories: Microsoft