After a fairly low-key launch in late 2022, ChatGPT hit 100 million users in January 2023, making it the fastest-growing application in history at the time. Our collective interest in AI was well and truly piqued—but the headlines were bleak.
Many claimed this signaled the end of a whole host of job roles, and for a while, it was feared that cyber roles might be among them.
Nearly two years on, we wanted to find out how cybersecurity experts feel about the impact AI has had—and will continue to have—on their industry.
Now that the dust has somewhat settled, we’re in a position to call it how it is: AI has not yet had the groundbreaking impact on the cybersecurity industry that was first predicted.
In fact, heading into 2025, CISOs will reportedly deprioritize genAI use by 10% due to a lack of quantifiable value.
It’s supporting experts and removing manual tasks, yes—but the technology is certainly in no position to be eradicating job roles just yet.
“AI offers three business value propositions over manual or standard automation: increased execution speed, reduction in cost, and ability to achieve massive scale inference or correlation,” Phillip Miller, CISO and member of The Guild, told us. “These value drivers are of interest to those creating new products as well as those defending existing solutions.”
With a rapidly changing threat landscape and budgets bracing for uncertainty, where should you focus your attention for 2025? We asked a suite of trusted experts, from Global CISOs to VPs of Engineering, to weigh in. Download our Cybersecurity Trend Report for 2025.
The cybersecurity industry experiences the same AI ‘fear of missing out’ as any other; nobody wants their competitors skipping ahead before they have the tech figured out.
But when it comes to cybersecurity, there’s even more urgency; bad actors are adopting AI too—and fast.
As ChatGPT’s adoption rate skyrocketed at the start of 2023, Darktrace researchers reported a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace and email customers between January and February. And that was just the beginning.
For Brandon Wu, a principal cybersecurity consultant and member of The Guild, sophisticated AI-assisted attacks are at the top of the 2025 threat agenda—and these will expand beyond the smarter phishing we’ve seen so far.
“Malicious actors will learn to use AI more effectively in their operations. We are already seeing this in crafting more effective phishing attacks,” he told us.
“While we have not seen the same in effectively automating malicious software and exploit development, we will likely start to see increases in this activity in the near future. This will pose a challenge to organizations who are not operating with effective security defenses nor starting to leverage automation and AI to improve their own operations.”
Despite the growing threat, cybersecurity experts are generally optimistic about a future with AI; 85% of cybersecurity experts see AI as a net positive in terms of risk.
But one thing’s for sure: when malicious actors do pull off their sophisticated AI-assisted attacks, the results can be devastating. Perhaps most famously, a finance worker was tricked into transferring $25 million to fraudsters after joining a video call with his Chief Financial Officer and several other prominent staff members—all of whom turned out to be deepfake recreations.
Sophisticated threats require sophisticated solutions, and that’s something AI alone can’t currently deliver. So while these malicious attacks pose a very real threat, they’re also good news for experts in the industry: their services are in greater demand now than ever before.
Yes, the experts we spoke to were positive about the impact of AI on their industry—but on the condition that humans remain very much at the wheel.
When humans are both the looming threat and the weakest link, you still need a human eye to catch AI’s blind spots and unpack the nuances. As with many industries, a hybrid approach remains the best one here.
There are an estimated 67,200 artificial intelligence companies worldwide, with around 1 in 4 based in the USA. They’re dealing with a whole host of sensitive information—from recording meetings to reviewing contracts—and need to have resilient cybersecurity practices to match.
In May 2023, hackers exploited a vulnerability in ChatGPT’s open-source library, exposing Social Security numbers, phone numbers, email addresses, and other sensitive information.
It was a moment of reckoning. Many users had been happily experimenting with AI tools, but few had stopped to consider what happened to the data they were feeding into them. That was a mistake, and one many organizations have since resolved to regulate carefully going forward.
In this way, AI is also creating work within existing organizations: defending against more sophisticated attacks, managing how employees use AI tools, and deciding which ones they’re allowed to use. 34% of organizations don’t have a generative AI policy in place, and they badly need one.
When we asked experts about the impact AI has had on the cybersecurity industry so far, they typically pointed to a reduction in manual processes and time-saving measures—be that reducing false positives, automating threat detection, data analysis, or incident response. But are we really getting the most out of what’s available to us?
For CEO Gautam Dev, it’s not AI that has underdelivered but us; we’re wasting its potential by treating it only as a productivity tool.
“As of now AI is being used mostly for Search and Analytics purposes within cybersecurity products and for the sake of using it so they can say they are AI-enabled,” he told us. “However, I see its true value being realized when products start implementing and training AI for workflow automations facilitating remediation of threats/attacks in real-time.”
While it’s appropriate to be cautious of new AI technologies, cybersecurity professionals must stay curious about them; AI is unlikely to replace them any time soon, but it can absolutely give them an advantage over their competitors when used smartly.
When we start to view AI as a box to be ticked, that’s when we risk falling behind. The cybersecurity industry has never been one to stand still—and neither can we.
Sentinel Guild provides you with flexible, fractional or full-time cybersecurity professionals to plug gaps in your team, offer an extra expert perspective, or help you prepare ahead of a busy period. Each professional has been carefully pre-vetted by us, and is ready to plug straight into your business and get to work. Find out more.
Yes, it is. The United States Bureau of Labor Statistics predicts a 32% increase in cybersecurity roles between 2022 and 2032. We always take such stats with a pinch of salt, but with the threat landscape expanding as it is, it’s impossible to imagine demand for cybersecurity professionals not growing well into the future.
Very. A 2021 report found that 57% of businesses experience phishing attempts on a daily or weekly basis, a percentage which is sure to have risen in the years since. In 2023, 1.76 billion phishing emails were sent (a 51% increase on 2022).