The widespread shortage of skilled security operations and threat intelligence staff in security operations centers (SOCs) leaves many organizations exposed to increased risk of a security incident, because they cannot investigate every potentially malicious behavior discovered in their environment in a thorough and repeatable way.

According to ESG, two-thirds of security professionals believe the cybersecurity skills gap has led to an increased workload for existing staff.

“Since organizations don’t have enough people, they simply pile more work onto those that they have,” wrote ESG Senior Principal Analyst Jon Oltsik. “This leads to human error, misalignment of tasks to skills, and employee burnout.”

Security teams need to prioritize and streamline workloads so they can focus on what matters most first. But how can organizations quickly identify and investigate threats when the skills shortage already leaves them stretched thin?

They face numerous challenges, including:

  • Delayed remediation caused by the sheer volume of alerts and false positives
  • Tedious, time-consuming investigations that span a variety of systems and tools to detect, investigate and escalate threats
  • Overwhelmed and overutilized SOC analysts
  • Ever-increasing data volumes as IT infrastructure becomes more diverse
  • Unresolved security threats

AI Helps Streamline Threat Identification, Investigation and Remediation

One effective way to improve SOC analysts' productivity and reduce dwell time is to use artificial intelligence (AI) to identify, analyze, investigate and prioritize security alerts.

AI in cybersecurity can act as a force multiplier for security analysts when applied directly to the investigation process. By applying analytics techniques such as supervised learning, graph analytics, reasoning processes and automated data mining, security teams can reduce manual, error-prone research; predict investigation outcomes (high or low priority, real or false positive); and identify threat actors, campaigns, related alerts and more.
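To make the supervised learning piece concrete, here is a minimal sketch of alert triage framed as a classification problem. It assumes a historical export of analyst-labeled alerts; the file name, feature columns and verdict labels are illustrative placeholders, not taken from any specific product.

```python
# Minimal sketch of supervised alert triage, assuming a CSV export of
# historical, analyst-labeled alerts. All file, column and label names
# here are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

alerts = pd.read_csv("labeled_alerts.csv")  # hypothetical export

# Hypothetical numeric features plus the analyst's final verdict.
features = ["severity", "asset_criticality", "failed_logins", "bytes_out"]
X = alerts[features]
y = alerts["verdict"]  # e.g., "true_positive" vs. "false_positive"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# How well the model separates real incidents from noise on held-out alerts.
print(classification_report(y_test, model.predict(X_test)))

# Rank alerts by predicted probability of being a real incident, so
# analysts work the most likely threats first.
real_idx = list(model.classes_).index("true_positive")
scored = X_test.assign(score=model.predict_proba(X_test)[:, real_idx])
print(scored.sort_values("score", ascending=False).head())
```

A model like this never replaces the analyst; it orders the queue so that scarce human attention lands on the alerts most likely to matter.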

A Framework to Help Bridge the Security Skills Gap

MITRE ATT&CK, a framework for understanding threat tactics, techniques and procedures based on real-world threat observations, is gaining traction as the standard for threat assessment and cybersecurity strategy. When AI findings are mapped to ATT&CK, analysts get direct insight into which tactics and attack stages a threat actor may be using, adding context and confidence to what the AI has discovered. It also speeds up response, because analysts immediately understand which tactics the bad actors have adopted. Not only does this save hours of skilled analysts' time, it also ensures that all alerts are analyzed in a consistent way.
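As a simple illustration of that mapping step, the sketch below enriches an internal detection name with ATT&CK context via a lookup table. The detection names and the mapping itself are hypothetical; a production system would draw on the full ATT&CK knowledge base rather than a hand-built table.

```python
# Illustrative sketch: enriching an internal detection name with MITRE
# ATT&CK context via a small lookup table. The detection names and this
# mapping are hypothetical examples, not a product's actual rule set.
from typing import NamedTuple, Optional

class AttackContext(NamedTuple):
    technique_id: str
    technique: str
    tactic: str

DETECTION_TO_ATTACK = {
    "multiple_failed_logins": AttackContext("T1110", "Brute Force", "Credential Access"),
    "suspicious_attachment": AttackContext("T1566", "Phishing", "Initial Access"),
    "powershell_encoded_cmd": AttackContext(
        "T1059", "Command and Scripting Interpreter", "Execution"
    ),
}

def enrich(detection_name: str) -> str:
    """Return a human-readable ATT&CK annotation for a detection."""
    ctx: Optional[AttackContext] = DETECTION_TO_ATTACK.get(detection_name)
    if ctx is None:
        return f"{detection_name}: no ATT&CK mapping on file"
    return (f"{detection_name}: {ctx.technique_id} ({ctx.technique}) "
            f"under the {ctx.tactic} tactic")

print(enrich("powershell_encoded_cmd"))
# -> powershell_encoded_cmd: T1059 (Command and Scripting Interpreter)
#    under the Execution tactic
```

Even this trivial enrichment shows the payoff: every analyst who picks up the alert sees the same tactic and technique labels, which is what makes investigations consistent and repeatable.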

Below are some of the benefits gained by an organization that implemented an AI solution in its SOC:

  • Return on investment (ROI) of 210 percent
  • SOC analyst productivity savings of $1.8 million
  • Improved organizational security valued at $651,936
  • Decreased average investigation time from four hours to 10 minutes
  • Reduced the share of SOC analysts' working hours spent on investigations from 65 percent to 15 percent

Register for the Webinar to Learn More

To learn more, download the Forrester Consulting report, “The Total Economic Impact (TEI) of IBM QRadar Advisor with Watson.”

Register for the July 23 webinar, “The Forrester TEI Report: Achieve 210% ROI by Empowering SOC Analysts With AI,” to hear from Forrester TEI Consultant Richard A. Cavallaro about how AI can help your organization bridge the cybersecurity skills gap.

