
Fighting AI-Powered Threats with AI: the Double-Edged Sword Every IT Leader Must Master

Category: News
Published: 26th November 2025

The rapid rise of artificial intelligence has created one of the most powerful technological shifts in decades, but it has also introduced a new generation of risks that every IT director must now confront.  

Data growth alone illustrates the scale of the challenge. Between 1980 and 2010, typical data volumes grew from megabytes to terabytes. With the cloud transformations of the last fifteen years, that figure has expanded enormously: we now work at the zettabyte scale, and a transition to ronnabytes is on the horizon as the mass adoption of AI drives an exponential increase in data.
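For a sense of just how far apart these units sit, the decimal SI byte prefixes step up by factors of a thousand. A quick sketch (the dictionary here is only an illustration of the standard prefix values):

```python
# Decimal SI byte-scale prefixes mentioned above (powers of 1000).
PREFIXES = {
    "megabyte": 10**6,
    "terabyte": 10**12,
    "zettabyte": 10**21,
    "ronnabyte": 10**27,  # "ronna" was added to the SI in 2022
}

# A ronnabyte is a million times larger than a zettabyte.
print(PREFIXES["ronnabyte"] // PREFIXES["zettabyte"])  # → 1000000
```

In other words, the jump from today's zettabyte era to ronnabytes is another six orders of magnitude, which is why securing that data compounds in difficulty rather than scaling linearly.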

With every new data set and integration, the complexity of securing that information compounds, and organisations are forced to rethink how they protect their systems at every layer as they evolve. 

This explosion in data coincides with an equally dramatic change in the cyber threat landscape. AI is enabling attackers to innovate faster than defenders. Traditional security tools simply cannot match the speed and adaptability of AI-driven cyber crime. 

CrowdStrike’s recent findings highlight just how urgent this problem has become. In 2024, the average eCrime breakout time fell to 48 minutes, down from 62 minutes the previous year, with the fastest observed breakout taking just under 51 seconds from initial access to lateral movement. When adversaries can move across a network in under a minute, the window for manual response is all but gone.

Modern attackers are also becoming more subtle. One of the most concerning developments is the widespread use of legitimate remote monitoring and management tools to disguise malicious behaviour as routine administrative activity.  

In their 2025 Global Threat Report, CrowdStrike reported that adversary groups are increasingly sending high volumes of spam emails impersonating charities, newsletters, and other legitimate senders. Shortly afterwards, a caller posing as helpdesk or IT support claims the spam is caused by malware or outdated spam filters. The user is then instructed to join a remote session using an RMM tool, with the attacker guiding them through the installation if the tool is not already present. The adversary then has complete access to the device.

When adversaries can blend seamlessly into normal workflows like this, organisations need far more advanced detection and response capabilities to identify and contain threats. 
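One simple defensive measure against this kind of RMM abuse is to flag any known remote-management tool that runs outside an organisation's approved list. The sketch below is purely illustrative, with hypothetical process names and telemetry format, and is not CrowdStrike's detection logic:

```python
# Minimal sketch: flag known RMM tools that are not on the organisation's
# approved allowlist. Tool names and the event format are illustrative
# assumptions, not any vendor's actual detection rules.

# RMM tools frequently seen abused in social-engineering campaigns.
KNOWN_RMM_TOOLS = {"anydesk.exe", "teamviewer.exe", "atera.exe", "screenconnect.exe"}

# Tools this (fictional) organisation has actually sanctioned.
APPROVED_RMM_TOOLS = {"teamviewer.exe"}

def flag_unapproved_rmm(process_events):
    """Return process events where a known RMM tool ran but is not approved."""
    alerts = []
    for event in process_events:
        name = event["process_name"].lower()
        if name in KNOWN_RMM_TOOLS and name not in APPROVED_RMM_TOOLS:
            alerts.append(event)
    return alerts

events = [
    {"host": "hr-laptop-07", "process_name": "AnyDesk.exe"},
    {"host": "dev-ws-12", "process_name": "TeamViewer.exe"},
]
print(flag_unapproved_rmm(events))  # only the AnyDesk event is flagged
```

An allowlist approach like this is deliberately conservative: because the tools themselves are legitimate, detection has to key on context (who installed it, and whether it is sanctioned) rather than on the binary alone.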

Traditional vs. Modern SOC Workflows

This rapidly shifting environment has prompted many organisations to rethink the structure and purpose of their Security Operations Centres. Traditional SOC models relied heavily on human operators, manual workflows, and perimeter-based monitoring. They were fundamentally reactive, built to identify suspicious activity and respond after an alert had been triggered. As AI-enhanced attacks become more common, that model is no longer sustainable. 

In recent years, many organisations have adopted a more modern approach, using automation, AI-driven analytics, and machine learning to proactively hunt for threats, contextualise alerts, and streamline investigation. However, even modern SOCs are beginning to feel the pressure of rising alert volumes.

AI not only makes attackers faster and more accurate; it also lowers the barrier to entry. CrowdStrike’s latest reports highlight how underground markets now offer Malware-as-a-Service, access brokerage and phishing toolkits, while generative AI is used to create convincing phishing campaigns and automate intrusion workflows. Together, these trends enable less skilled criminals to run campaigns that once required advanced expertise. 

The volume of attacks is rising, and internal SOC teams are increasingly facing alert overload and burnout. 

The Next Generation of SOC Workflows 

This has led industry experts to consider what a truly AI-native SOC might look like. In this model, AI systems and intelligent agents are responsible for much of the initial triage, investigation, and automated response, enabling analysts to focus on high-impact decision making.  
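The division of labour described above can be sketched as a simple routing rule: the AI layer disposes of the clear-cut cases at both ends of the confidence spectrum, and only the ambiguous middle reaches a human. This is a conceptual illustration with invented thresholds, not a description of any vendor's product:

```python
# Conceptual sketch of AI-native SOC triage. The confidence score, the
# thresholds, and the action names are illustrative assumptions only.

def triage(alert_id: str, confidence: float) -> str:
    """Route an alert based on a model-assigned confidence score in [0, 1]."""
    if confidence >= 0.9:
        return "auto-contain"        # high confidence: isolate automatically
    if confidence <= 0.2:
        return "auto-close"          # likely benign: close with an audit trail
    return "escalate-to-analyst"     # ambiguous: a human makes the call

print(triage("ALERT-001", 0.95))  # → auto-contain
print(triage("ALERT-002", 0.10))  # → auto-close
print(triage("ALERT-003", 0.55))  # → escalate-to-analyst
```

The design choice worth noting is that automation handles the volume while the escalation path preserves human judgement for exactly the cases where business context matters most.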

Some theorists believe this evolution may eventually align with early forms of AGI (artificial general intelligence), offering human-level cognitive capabilities at machine speed. Regardless of the terminology, the direction of travel is clear: the SOC of the future will be far more autonomous, continuously learning, and deeply integrated with the organisation’s technology ecosystem.

Crucially, humans will not be replaced. They will remain accountable for approvals, ownership of major incidents, and decisions that require business context or ethical judgement. But with agentic AI handling the operational load, SOC teams will finally be able to operate at the speed required to defend an AI-enabled world.