Cyber threats are growing more sophisticated by the day, and the systems meant to guard against them are struggling to keep up. With cyberattacks projected to cost the global economy approximately $13 trillion by 2028, the need for better cybersecurity solutions has never been more urgent.
Traditional Security Operations Centers (SOCs) still rely heavily on human analysts to sift through high volumes of alerts, many of which turn out to be false alarms. Investigations are slow, talent is stretched thin, and security teams often react to threats only after the damage is done.
Led by Dr. Mustafa Ghaleb, a KFUPM research team at the Interdisciplinary Research Center for Intelligent Secure Systems is addressing this challenge through a collaborative cognitive SOC that actively anticipates threats. This SOC integrates artificial intelligence and machine learning to process alerts, uncover patterns, and respond faster than any traditional setup.
After surveying the current landscape, the team outlined the limitations of existing systems. Alert fatigue, lengthy investigations, and staffing shortages topped the list. From there, they built their own SOC and began experimenting. One important development was a Large Language Model (LLM) capable of generating simulated attack scenarios along with countermeasures. By mimicking human thought processes, it offers context-aware insights that span sensor data, network activity, and application behavior. Instead of waiting for something to go wrong, it anticipates what might go wrong next. Their automated cybersecurity validation framework runs continuous tests to detect weaknesses before attackers can exploit them.
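A minimal sketch of such a continuous validation loop, assuming a hypothetical scenario generator and test harness, might look like the following Python; `generate_scenario` and `run_simulation` are illustrative stand-ins, not the team’s actual framework.

```python
# Minimal sketch of a continuous validation loop. All names and the
# detection logic are illustrative assumptions.
import random
import time

ATTACK_TEMPLATES = [
    "credential stuffing against the VPN gateway",
    "lateral movement via SMB from a compromised workstation",
    "data exfiltration over DNS tunneling",
]

def generate_scenario() -> str:
    """Stand-in for the LLM that synthesizes an attack scenario."""
    return random.choice(ATTACK_TEMPLATES)

def run_simulation(scenario: str) -> bool:
    """Stand-in for replaying the scenario in a sandboxed environment.
    Returns True if the SOC's detection rules raised an alert."""
    return random.random() > 0.2  # placeholder detection outcome

def continuous_validation(rounds: int = 3, interval_s: float = 5.0) -> None:
    for _ in range(rounds):
        scenario = generate_scenario()
        if run_simulation(scenario):
            print(f"covered: {scenario}")
        else:
            print(f"GAP FOUND: no alert for '{scenario}' -- flag for tuning")
        time.sleep(interval_s)

if __name__ == "__main__":
    continuous_validation(interval_s=0.1)
```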
Another development was an Arabic-language LLM focused on social engineering, specifically phishing emails. Given how scarce Arabic-language phishing datasets are, the model fills a unique gap: it can be used to train users to recognize and avoid such phishing attempts.
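As an illustration of the user-training use case, a phishing-awareness drill built on such a model could be as simple as the sketch below. The two Arabic messages are illustrative stand-ins (an urgent “account suspended” lure and a routine meeting reminder), not output from the team’s model.

```python
# Hypothetical phishing-awareness drill: show a message, ask the user to
# classify it, and give immediate feedback. Sample texts are illustrative.
SAMPLES = [
    # "Dear customer, your account has been suspended. Click here immediately."
    ("عميلنا العزيز، تم تعليق حسابك. اضغط هنا فوراً لتحديث بياناتك.", True),
    # "Reminder: project team meeting tomorrow at 10 a.m."
    ("تذكير: اجتماع فريق المشروع غداً الساعة العاشرة صباحاً.", False),
]

def drill() -> None:
    score = 0
    for text, is_phish in SAMPLES:
        answer = input(f"\n{text}\nPhishing? [y/n] ").strip().lower() == "y"
        if answer == is_phish:
            score += 1
            print("Correct.")
        else:
            print("Incorrect:", "this is a phishing lure."
                  if is_phish else "this is a routine message.")
    print(f"\nScore: {score}/{len(SAMPLES)}")

if __name__ == "__main__":
    drill()
```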
What sets this SOC apart is its collaborative nature, which prioritizes confidentiality. Sharing threat intelligence among organizations often means compromising on data privacy, but KFUPM’s team uses federated learning and differential privacy to ensure that no organization ever exposes its raw data. Each participant trains the model locally and shares only its model updates, with calibrated noise added so that individual records cannot be inferred from the shared model. This allows the LLM to learn from multiple real sources while keeping each contributor’s data private.
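This maps onto the standard federated-averaging pattern with noised updates. A minimal numpy sketch follows; the clipping bound, noise scale, and toy gradient are assumptions for illustration, not the team’s published parameters.

```python
# Illustrative federated averaging with differential-privacy-style noise.
import numpy as np

def local_update(weights: np.ndarray, data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """Stand-in for one organization's local training step."""
    gradient = data.mean(axis=0) - weights  # toy gradient for illustration
    return weights + lr * gradient

def privatize(update: np.ndarray, clip: float = 1.0, sigma: float = 0.5) -> np.ndarray:
    """Clip the update's norm, then add calibrated Gaussian noise so the
    raw contribution cannot be recovered from what is shared."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / (norm + 1e-12))
    return clipped + np.random.normal(0.0, sigma * clip, size=update.shape)

def federated_round(global_w: np.ndarray, client_datasets: list) -> np.ndarray:
    """One round: each organization trains locally and shares only a noised
    update; raw data never leaves its owner."""
    deltas = []
    for data in client_datasets:
        local_w = local_update(global_w.copy(), data)
        deltas.append(privatize(local_w - global_w))
    return global_w + np.mean(deltas, axis=0)

rng = np.random.default_rng(0)
clients = [rng.normal(loc=i, size=(100, 4)) for i in range(3)]  # 3 organizations
w = np.zeros(4)
for _ in range(20):
    w = federated_round(w, clients)
print("global model after 20 rounds:", w.round(2))
```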
The model is also 2.7 times faster than traditional systems and requires only modest computing power, making it accessible to smaller organizations without high-end infrastructure. And because it explores a wider range of reasoning paths, it can test multiple hypotheses instead of committing to a single route (see the sketch below). The cognitive SOC architecture has now been packaged for one-click installation, and early results suggest it’s adaptable across industries. Patent filings are now in progress.
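The multi-path behavior resembles a beam search over candidate hypotheses. In the toy sketch below, `expand` and `score` are hypothetical stand-ins for the model’s proposal and confidence-scoring steps.

```python
# Toy beam search over investigative hypotheses: keep the most promising
# paths at each step instead of committing to a single route.
import heapq
import random

def expand(hypothesis: str) -> list[str]:
    """Stand-in for the LLM proposing follow-up investigative steps."""
    steps = ["check auth logs", "inspect DNS traffic", "review endpoint telemetry"]
    return [f"{hypothesis} -> {step}" for step in steps]

def score(path: str) -> float:
    """Stand-in for the model's confidence that a path explains the alert."""
    return random.random()

def best_path(root: str, depth: int = 2, beam: int = 2) -> str:
    frontier = [root]
    for _ in range(depth):
        candidates = [p for h in frontier for p in expand(h)]
        frontier = heapq.nlargest(beam, candidates, key=score)  # prune to top-k
    return max(frontier, key=score)

print(best_path("suspicious login burst"))
```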
The team also launched “VGuard,” a multi-agent AI model trained on ethical hacking, including both offensive and defensive cyber tactics. After lab testing, the proposed solution will undergo real-world testing and deployment with the support of industry partners.
An open-source dataset is also in the works, with initial versions based on text and future iterations to include images, video, and command-line visuals. This way, LLMs can learn from real-world hacker tutorials and test cases.
In short, the team’s work rethinks how cybersecurity can function. Their model is smarter, faster, more cooperative, and more secure. And as threats evolve, this kind of thinking might just be what keeps digital systems one step ahead.
Industry, Innovation, and Infrastructure
Sustainable Cities and Communities