Cybersecurity experts often tell us they know they should be doing more with AI, but they don’t have the time to experiment with it. It’s a vicious cycle: without time to test the tools that could save them time, developing AI agents risks becoming just another task pushed down the to-do list.
In this post, we’ve aimed to bring you a top-level overview of AI agents and their potential use cases within our industry.
As always when it comes to AI, we believe this technology is most powerful when humans and AI collaborate effectively, so we’ve included our recommendations for crafting an environment where they can work together to become more than the sum of their parts.
We have also provided an overview of some key frameworks to start testing out, but this comes with a side note: there are experienced cybersecurity professionals who are already experts in the world of AI agents. Working with one is a smart way to speed up your journey to using AI in a meaningful way within your organization. We have several such experts within our network, and would be happy to discuss your needs.
First, let’s clarify: what exactly are AI agents? When we talk about AI agents, we could be talking about either of the following:

Individual AI agents: single tools that carry out a specific, defined task when prompted.

Agentic workflows: systems of multiple agents that plan, act, and improve autonomously, with little human prompting along the way.

The recent flurry of posts you’ve seen on AI agents? They’re usually referring to agentic workflows. The fact that they work and improve autonomously makes them a more viable option for taking over (or assisting with) human tasks.
While both types of AI agents have their uses within cybersecurity, it’s the agentic workflows that are getting leaders talking; their ‘learn and repeat’ approach makes them highly suitable for key cybersecurity tasks including:
AI agents can monitor vast amounts of data at speed, searching for those all-important anomalies. And the more feedback they receive about which anomalies are genuinely problematic, the better they become at sorting real threats from expected behavior (we’ve included a minimal sketch of this feedback loop just after this list).
A series of AI agents can work together to implement the various steps of an incident response plan, including alerting the right people and triggering the correct responses (see the second sketch after this list).
AI agents can be trained to hunt for vulnerabilities within your organization and alert you to the issues. The bad news? Bad actors can train them to do the same, too.
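To make the first of these concrete, here’s a minimal sketch of a detection-with-feedback loop. It uses scikit-learn’s IsolationForest as a stand-in for the detection model; the threshold logic and feedback rule are our own illustrative assumptions, not a reference implementation.

```python
# Minimal sketch: an anomaly-detection agent that adjusts to analyst feedback.
# Thresholds and feedback rules are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

class DetectionAgent:
    def __init__(self, threshold: float = -0.1):
        self.model = IsolationForest(random_state=0)
        self.threshold = threshold  # events scoring below this are flagged

    def fit(self, baseline_events: np.ndarray) -> None:
        """Learn what 'expected' activity looks like from historical telemetry."""
        self.model.fit(baseline_events)

    def flag_anomalies(self, events: np.ndarray) -> list[int]:
        """Return indices of events the agent considers anomalous."""
        scores = self.model.decision_function(events)  # lower = more anomalous
        return [i for i, s in enumerate(scores) if s < self.threshold]

    def incorporate_feedback(self, false_positive_rate: float) -> None:
        """Crude feedback loop: if analysts mark most flags as benign,
        lower the threshold so the agent flags less aggressively."""
        if false_positive_rate > 0.5:
            self.threshold -= 0.05
        elif false_positive_rate < 0.1:
            self.threshold += 0.05

# Usage: fit on a period of normal telemetry, flag new events,
# then feed back the analysts' verdicts on what was actually benign.
rng = np.random.default_rng(0)
agent = DetectionAgent()
agent.fit(rng.normal(0, 1, size=(1000, 4)))           # baseline telemetry
flagged = agent.flag_anomalies(rng.normal(0, 1.5, size=(50, 4)))
agent.incorporate_feedback(false_positive_rate=0.6)    # analysts: mostly noise
```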
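And for the incident-response item, here’s a sketch of how a series of agents might hand off the steps of a response plan. The step names, severity levels, and contacts are hypothetical placeholders; a real workflow would map onto your own runbooks.

```python
# Minimal sketch: chaining agents through the steps of an incident-response plan.
# Step names, severities, and contacts are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Incident:
    source: str
    severity: str             # e.g. "low", "high", "critical"
    log: list[str] = field(default_factory=list)

def triage_agent(incident: Incident) -> Incident:
    incident.log.append(f"triaged {incident.source} as {incident.severity}")
    return incident

def notify_agent(incident: Incident) -> Incident:
    # Alert the right people based on severity.
    contact = "on-call engineer" if incident.severity == "low" else "CISO + on-call"
    incident.log.append(f"notified {contact}")
    return incident

def containment_agent(incident: Incident) -> Incident:
    if incident.severity in ("high", "critical"):
        incident.log.append("isolated affected host from the network")
    return incident

# The 'workflow' is simply an ordered hand-off between specialized agents.
PLAYBOOK = [triage_agent, notify_agent, containment_agent]

incident = Incident(source="auth-server-3", severity="high")
for step in PLAYBOOK:
    incident = step(incident)
print("\n".join(incident.log))
```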
Some are heralding the development of agentic workflows as the ‘third wave of AI’. We’ve even seen a job post seeking a (human-controlled) AI agent, marking what could be a landmark shift in how we work – period.
That said, we’ve yet to hear of AI agents successfully taking over the role of a human cybersecurity professional. The general consensus is that AI agents make a useful sidekick, taking on some junior-level tasks under close supervision.
“Been testing AI agents for threat hunting. The good: they catch patterns humans might miss. The bad: still lots of false positives”, writes one Reddit user. “Right now they're like eager junior analysts - enthusiastic but need constant supervision. Definitely keeping an eye on this space though.”
It’s a tale as old as AI itself: getting the best out of AI agents requires careful setup and close collaboration from experienced, skilled humans. You get out what you put in.
Here are four things we’d recommend when it comes to building a space for AI agents to add real value within the workplace:
First, set guardrails. People rarely follow rigid steps to get things done. Instead, organizations rely on guardrails: rules, oversight, and controls that keep work on track. Not everyone can approve company expenses. Access to sensitive data is restricted. High-risk decisions require extra scrutiny. These safeguards don’t dictate every step, but they ensure that actions align with policies, regulations, and best practices.
AI agents need similar boundaries. Human-in-the-loop systems—where people review or approve AI-driven decisions—aren’t new. We already apply them in human workflows, like junior employees needing managerial sign-offs. The key is ensuring the right oversight at the right time. And when disagreements arise? Businesses must have clear escalation and resolution processes.
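As a concrete illustration, here’s one way a human-in-the-loop gate might look in code. The risk scores, approval threshold, and console prompt are assumptions made for the sketch; the point is simply that the agent proposes, and above a risk threshold a person disposes.

```python
# Minimal sketch: a human-in-the-loop gate for agent-proposed actions.
# Risk scores and the approval threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (high-risk), assigned upstream

APPROVAL_THRESHOLD = 0.4  # actions above this need human sign-off

def execute(action: ProposedAction) -> None:
    print(f"executing: {action.description}")

def human_approves(action: ProposedAction) -> bool:
    # In production this would be a ticket, chat prompt, or review queue;
    # here we simply ask on the console.
    answer = input(f"approve '{action.description}' "
                   f"(risk {action.risk_score:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def run_with_oversight(action: ProposedAction) -> None:
    if action.risk_score <= APPROVAL_THRESHOLD:
        execute(action)   # low risk: the agent acts alone
    elif human_approves(action):
        execute(action)   # high risk: a human signed off
    else:
        print(f"escalating: '{action.description}' rejected by reviewer")

run_with_oversight(ProposedAction("block IP 203.0.113.7", risk_score=0.2))
run_with_oversight(ProposedAction("disable VPN for all staff", risk_score=0.9))
```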
Second, let trust be earned. Trust isn’t automatic; we build trust in colleagues by observing their reliability over time, and the same will happen with AI agents. Through interactions, they develop “memories” of past experiences, learning which sources or collaborators are trustworthy. Humans, in turn, will decide which AI outputs they trust based on consistency and accuracy.
But trust alone isn’t enough—it must be verified. AI systems should incorporate mechanisms that ensure their outputs remain reliable, much like how human work is monitored through performance reviews and audits. Organizations should track AI performance, ensuring feedback loops identify areas for improvement. And when trust breaks down? Just as human teams have escalation paths for resolving issues, companies will need protocols to manage breakdowns in AI-human trust.
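In practice, that verification can start simple: track how often an agent’s outputs survive human review, and escalate when accuracy drifts below a floor. The window size and thresholds below are assumptions for illustration.

```python
# Minimal sketch: tracking an agent's reviewed accuracy over a rolling window
# and escalating when trust drops below a floor. Thresholds are illustrative.
from collections import deque

class TrustTracker:
    def __init__(self, window: int = 100, floor: float = 0.8):
        self.verdicts: deque[bool] = deque(maxlen=window)
        self.floor = floor

    def record_review(self, output_was_correct: bool) -> None:
        """Log a human reviewer's verdict on one agent output."""
        self.verdicts.append(output_was_correct)

    @property
    def accuracy(self) -> float:
        return sum(self.verdicts) / len(self.verdicts) if self.verdicts else 1.0

    def needs_escalation(self) -> bool:
        """Trigger the breakdown protocol once enough evidence accumulates."""
        return len(self.verdicts) >= 20 and self.accuracy < self.floor

tracker = TrustTracker()
for correct in [True] * 15 + [False] * 10:   # reviewers reject a run of outputs
    tracker.record_review(correct)
if tracker.needs_escalation():
    print(f"accuracy {tracker.accuracy:.0%} below floor: route to human review")
```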
Third, select AI the way you hire people. When hiring employees, the process is nuanced: companies don’t just pick the best resume; they conduct interviews, assess culture fit, and continuously evaluate performance.
Yet AI selection today is rigid—driven by demos, feature comparisons, and price points, rather than a deeper understanding of how an AI system will behave in dynamic environments.
We believe AI selection should more closely resemble the human hiring process: we should be looking for references (where has this AI agent performed successfully before?) while also mitigating our own biases. Performance should be monitored over time (not just at the point of purchase), and ‘pay for performance’ options should become the norm, where vendors charge based on outcomes rather than output.
Fourth, monitor for growth, not just uptime. Traditional software monitoring is about uptime and error logs; AI monitoring is about growth and adaptation. The real measure of success is how well agents navigate, adapt, and improve.
AI agents need feedback, and they need it in a carefully considered format. Agents must be monitored proactively to understand how they change and develop over time (one simple drift check is sketched below).
But it’s a two-way feedback loop, too; AI agents can help us discover our own blind spots as well.
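One simple way to watch how an agent changes over time is to compare the distribution of its decisions period over period; a sustained shift is a prompt for investigation on both sides of the loop. The metric and tolerance here are assumptions for the sketch.

```python
# Minimal sketch: detecting drift in an agent's alert rate week over week.
# A persistent shift may mean the agent changed, or the environment did;
# either way, it's a prompt for a human to look. The tolerance is illustrative.
def alert_rate(decisions: list[bool]) -> float:
    """Fraction of events the agent flagged in a given period."""
    return sum(decisions) / len(decisions)

def drifted(last_week: list[bool], this_week: list[bool],
            tolerance: float = 0.10) -> bool:
    """Flag when the alert rate moves more than `tolerance` between periods."""
    return abs(alert_rate(this_week) - alert_rate(last_week)) > tolerance

last_week = [False] * 95 + [True] * 5    # 5% of events flagged
this_week = [False] * 80 + [True] * 20   # 20% flagged: something changed
if drifted(last_week, this_week):
    print("alert-rate drift detected: review agent behaviour and recent inputs")
```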
Yes, it’s important to consider the nuances of AI agents – their flaws, their limitations, their potential impact on organizational structures, and the risks they bring.
(As one Reddit user points out: “Do you want to feed someone else’s big bucket of data with live threat and vulnerability data from your organization? The agent is only going to be useful with this data and unless you have full control of it I see it as adding even more risk.”)
But what we believe is more important? Starting to experiment with what’s available – carefully, and now.
Because one thing’s for sure: bad actors will be. And AI agents are being trained to find and exploit one-day vulnerabilities as we speak.
So where should you start?
If you want to build agentic workflows, here are our recommendations for platforms to try out depending on your use case:
You don’t need us to tell you that the world of AI is moving quickly — like ours, your LinkedIn feed is probably full of people preaching exactly this.
If you want to understand how to build and use AI agents safely and productively within your organization, working with an AI expert is the most effective way to achieve this. Let us know what you’re looking for, and we’ll match you with a vetted cybersecurity expert within our network. Find out more.