June 26, 2019 | By Kacy Zurkus

Traditionally, information security has been about protecting the network against external threats. As innovation and the cloud have slowly chipped away at the perimeter, however, organizations have become challenged to defend against not only nefarious actors from the outside, but also malicious insiders within the company’s digital walls.

“With the evolution of modern techniques and exploitation of the end user, we are on the cusp of a new world where most threats resemble or leverage the insider one way or another, willingly or unwillingly,” said Adrian Peters, board member of the Internet Security Alliance. According to Peters, convergence is driving security to the point where teams need to focus on data and entitlements, and practitioners should be thinking about what that really means as cloud adoption, both within data centers and via external providers, continues to increase.

Enter cybersecurity artificial intelligence (AI). In an interview with Information Security Media Group, Senseon Founder and CEO David Atkinson defined AI as “the aspiration to build machines that could emulate what we do as people.” The irony is that people make mistakes. To err is human. So how can we use AI to mitigate the risks that come directly from the poor cyber hygiene of human beings?

Knock Knock. Who’s There on the Network?

Determining who is on the network is a matter of critical importance in enterprise security. Given the number of passwords leaked in data breaches, it’s increasingly important for employees to use secure passwords. Unfortunately, many users haven’t fully adopted good password habits; they continue to reuse the same password across multiple accounts. Often, those passwords are weak, making it easier for an attacker to make an educated guess and gain access to the network.
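Checking new passwords against a corpus of known leaks is one concrete control here. Below is a minimal Python sketch using the public Pwned Passwords range API: only the first five characters of the password’s SHA-1 hash ever leave the machine (k-anonymity), and the surrounding enrollment policy is left as an assumption.

```python
# Minimal sketch: check a candidate password against known breach corpora
# via the Pwned Passwords range API (k-anonymity: only the first five hex
# characters of the SHA-1 hash are sent over the wire).
import hashlib
import requests

def times_pwned(password: str) -> int:
    """Return how many times this password appears in known breaches."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A reused, guessable password will come back with a very large count.
    print(times_pwned("password123"))
```

A count above zero is a reasonable trigger to reject the password at enrollment or force a reset.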

Organizations first need to determine who their people are and what each of those individual users needs to access within the organization. Then, figure out how to deliver those processes in a quality, secure way — or, as Peters put it, “We now need to think from the inside out.” Cybersecurity AI offers significant progress in enabling this inside-out approach to authentication and identity management.

“AI is starting to give us the capability to establish user behavior, user patterns and why they are doing what they are doing,” Peters said.
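As a rough illustration of what establishing user behavior can look like, the sketch below trains an unsupervised outlier model on simulated login telemetry. The feature set (login hour, session volume, distance from the usual location) and the contamination rate are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch: baseline "normal" login behavior, then score new
# sessions as inliers (1) or outliers (-1). Features are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated history: [login hour, MB transferred, km from usual location]
normal_sessions = np.column_stack([
    rng.normal(10, 1.5, 500),   # typically logs in around 10:00
    rng.normal(50, 10, 500),    # moves ~50 MB per session
    rng.normal(2, 1, 500),      # works near the usual office
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A 3 a.m. login moving 900 MB from 4,200 km away should stand out.
new_sessions = np.array([[10.2, 55.0, 1.8], [3.0, 900.0, 4200.0]])
print(model.predict(new_sessions))  # [ 1 -1 ]
```

In practice, a model like this would be trained per user or per peer group and retrained as behavior drifts.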

Where Has All the Data Gone?

Another example of poor cyber hygiene is when organizations collect and store data endlessly, with no limit on what is gathered and no destruction of data that no longer serves a purpose. This lack of policy over the complete life cycle of data poses security risks to the organization in the event of a cyberattack. The use of cybersecurity AI gives a broader view of technology assets and identities.

“Through the output of a lot of the data tools, AI can now determine why certain pieces of data are labeled differently and trigger a certification or remediation,” noted Peters.

As a result, security teams can then look at why certain aspects of data are no longer being accessed and make more informed decisions about whether they are going to certify the confidentiality or integrity of that data.
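A lifecycle policy can start with something as simple as an access-age sweep. The sketch below flags inventory records whose last access falls outside a retention window so a human can certify or destroy them; the field names and 365-day window are illustrative assumptions.

```python
# Minimal sketch: flag data that has outlived its retention window so the
# certification/remediation workflow described above can be triggered.
# Inventory shape and the 365-day window are assumptions for illustration.
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)

inventory = [
    {"asset": "hr/payroll_2012.csv", "label": "confidential",
     "last_access": datetime(2018, 1, 4)},
    {"asset": "eng/build_cache.bin", "label": "internal",
     "last_access": datetime(2019, 6, 1)},
]

now = datetime(2019, 6, 26)
for record in inventory:
    age = now - record["last_access"]
    if age > RETENTION:
        print(f"review: {record['asset']} ({record['label']}), "
              f"untouched for {age.days} days")
```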

The Challenges of Applying Cybersecurity AI

Of course, while AI is a very powerful tool to defend against cyberattacks, Atkinson noted that there are limits to what AI can do. Applying AI means bringing complex mathematics to bear on a complex, ever-changing data set, and that comes with challenges.

“Sorry to break this to you,” Atkinson said, “but people are weird and technology is weird. At the same time in enterprises, you have good attackers trying to behave normally. It takes a great deal of talent and a lot of specific engineering, but it’s a problem worth solving.”

By building AI models around the life cycle of the user, security teams can start to outline patterns of normal behavior and detect the patterns that deviate from them, but it is very much a learning process.

“If leaders of the organization are not thinking holistically yet, AI can be a beneficial tool that enables the security team to establish outliers around users and systems that are not following a pattern,” Peters said. AI has the ability to identify when users haven’t logged in or haven’t been on a network, but there is a process to building out these models, which takes time.
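Dormant accounts are among the simplest of those outliers to surface. A minimal sketch, assuming an auth log keyed by user and a 90-day threshold (both assumptions for illustration):

```python
# Minimal sketch: list accounts with no successful login inside the window.
# The log shape and 90-day cutoff are assumptions, not a standard.
from datetime import datetime, timedelta

last_login = {                    # user -> most recent successful login
    "r.singh": datetime(2019, 6, 20),
    "j.doe": datetime(2019, 2, 11),
    "svc-backup": datetime(2018, 9, 3),
}

cutoff = datetime(2019, 6, 26) - timedelta(days=90)
dormant = [user for user, seen in last_login.items() if seen < cutoff]
print(dormant)  # ['j.doe', 'svc-backup']: candidates to disable and review
```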

If You Build It, AI Can Help

No technology is perfect, but the use of AI and machine learning capabilities does help mitigate the risk of insider threats. In most cases, users aren’t acting maliciously, which is why building models that can separate honest mistakes from deliberate abuse is critical to mitigating the risk of both insider and outsider threats.

Through the use of machine learning algorithms, security teams can see when a user’s risk posture spikes, then check for additional abnormal activities, such as data exfiltration, that warrant further investigation to determine whether the user is acting maliciously or the account has been compromised.
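One way to operationalize a risk posture spike is a simple statistical test over a per-user signal. The sketch below flags a greater-than-three-sigma jump in daily outbound volume; the numbers and threshold are illustrative, not any particular vendor’s algorithm.

```python
# Minimal sketch: z-score today's outbound volume against the user's own
# trailing history; a large spike is a cue to look for exfiltration or
# account compromise. Data and the 3-sigma threshold are illustrative.
import numpy as np

daily_mb_out = np.array([48, 52, 50, 47, 55, 51, 49, 53, 50, 940])

history = daily_mb_out[:-1]            # trailing baseline
mu, sigma = history.mean(), history.std()
z = (daily_mb_out[-1] - mu) / sigma

if z > 3:
    print(f"risk spike (z={z:.0f}): escalate for investigation")
```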

By understanding that human behavior doesn’t really change, and will likely never evolve as rapidly as technology, organizations can consider the security solutions that will not only protect them from motivated attackers, but also mitigate the risks from human error. As AI technologies continue to evolve, they will be even more useful in authenticating users and identifying potential cyberattacks from the inside out.
