Editor’s Note: Cyber leaders no longer have to argue, on faith, that artificial intelligence belongs at the center of defensive operations. With the May 2026 release of “Empowering Defenders: AI for Cybersecurity” — the World Economic Forum’s white paper, produced in collaboration with KPMG — the case is being made with quantified results: $1.9 million in average breach-cost reduction, 80-day shorter breach lifecycles and operational gains across 20 partner case studies that include IBM, Accenture, Check Point Software, ING, KPMG itself and the Saudi energy giant Aramco.

For cybersecurity, data privacy, regulatory compliance, and eDiscovery professionals, the report matters on three fronts. It documents specific AI-driven detection and response patterns that now map directly to disclosure timelines under the General Data Protection Regulation, the EU’s Digital Operational Resilience Act, and the SEC’s Item 1.05 four-business-day breach reporting rule. It exposes a widening capability gap between AI-resourced enterprises and the mid-market law firms and managed service providers that support breach investigations. And it sketches an agentic-AI roadmap that is likely to push legal operations teams to revisit governance models as AI autonomy increases.

What to watch next: how regulators treat human-in-the-loop checkpoints, and which managed-service providers convert these case-study metrics into client deliverables.


Content Assessment: AI in cybersecurity moves from promise to proof as WEF and KPMG track defender gains

Information - 93%
Insight - 93%
Relevance - 92%
Objectivity - 90%
Authority - 91%

Overall Score: 92% (Excellent)

A percentage-based assessment of the qualitative benefit and positive reception of the recent article from ComplexDiscovery OÜ titled "AI in cybersecurity moves from promise to proof as WEF and KPMG track defender gains."


Industry News – Artificial Intelligence Beat

AI in cybersecurity moves from promise to proof as WEF and KPMG track defender gains

ComplexDiscovery Staff

Artificial intelligence is moving from promise to proof in cybersecurity. Organizations using AI extensively in security have cut average breach costs by $1.9 million and shortened breach lifecycles by about 80 days, according to data anchoring a new World Economic Forum and KPMG white paper released Monday.

The report, “Empowering Defenders: AI for Cybersecurity,” lands at a moment when 94 percent of cyber leaders identify AI as the defining force in their field and 77 percent of organizations already deploy it operationally, figures drawn from the Forum’s Global Cybersecurity Outlook 2026. The shift it documents is concrete: AI is moving from pilot programs and pitch decks into measurable defensive performance across vulnerability detection, threat intelligence triage, phishing analysis, and incident response.

From pilot to production

“AI has the potential to shift the balance towards defenders,” said Akshay Joshi, head of the Centre for Cybersecurity at the World Economic Forum. “Organizations that treat it as a strategic capability, rather than a standalone tool, will be better placed to turn growing cyber risk into resilience and competitive advantage.”

The paper, produced in collaboration with KPMG and built on contributions from 105 representatives across 84 organizations and 15 industries, follows the Forum’s 2025 publication that warned about AI cybersecurity risks. The 2026 edition pivots from risk to deployment, examining 20 partner-submitted case studies from companies including Allianz, Aramco, Google, IBM, ING, Microsoft, Santander Group, and Standard Chartered. The metrics in those case studies are self-reported by the submitting organizations; KPMG itself contributed a case study describing efficiency gains in its own threat intelligence operations.

Numbers worth knowing

The reported case-study metrics are notable, but they should be read as organization-submitted performance indicators rather than independently audited industry benchmarks.

IBM said its Autonomous Threat Operations Machine, or ATOM, launched in April 2025, handles about 95 percent of daily security investigations at the company’s managed security services arm, automating more than 850 analyst hours each month and cutting end-to-end investigation time by 37 percent. Accenture deployed an AI capability called Agent Oliver across more than 100,000 internet-facing sites; analysis time per site dropped from about 15 minutes to under one minute, a 93 percent reduction in manual effort. KPMG’s threat intelligence team reported a 25 percent increase in operational efficiency after introducing a custom AI model trained on its threat repository. Check Point Software’s Universe research platform compressed investigation cycles from about three weeks of manual effort to roughly one hour. Dream Group cut malware remediation guidance time by up to 95 percent.

Adversaries are applying similar AI capabilities

Attackers are using AI to conduct reconnaissance, generate malware, evade detection, and launch attacks at scale, compressing what once took weeks into minutes and lowering the technical barrier for entry-level operators, the report said. The defenders’ edge, the Forum argues, lies in proprietary internal data that attackers cannot match — context the AI can use to prioritize the risks that actually matter to a specific environment.

Those reported gains help explain why chief information security officer budgets are tilting toward AI even as governance uncertainty grows. As of the Forum’s January 2026 outlook, 53 percent of cybersecurity teams reported underfunding and 55 percent reported understaffing, citing ISACA’s State of Cybersecurity 2025. AI is being positioned as the force multiplier that closes the operational gap.

“Attackers are moving faster and at greater scale than ever before. This report is a call to action for organizations to match that pace, with AI as a force multiplier for cyber defence,” said Laurent Gobbi, partner and global head of cyber and tech risk at KPMG.

What it means for IG and eDiscovery

Adoption is uneven. Larger enterprises with greater technical maturity report higher AI-in-security usage, while small and medium businesses, governments, and non-governmental organizations lag because of financial constraints, skills gaps, and data immaturity, the report said. That split has direct consequences for mid-market law firms, regional managed service providers, and government cyber units that feed into legal-discovery and information-governance workflows. Firms that build AI-augmented incident response into their service catalog will compete differently for breach work; those that do not will face client and request-for-proposal pressure as enterprises raise the bar for managed cyber services.

For information governance and eDiscovery professionals, the deployment patterns documented in the paper map directly onto compliance obligations. ING said its machine learning data leakage prevention pipeline has processed 5 million alerts and lifted analyst precision by 20 percent across more than 60,000 employees, throughput that helps the bank meet disclosure deadlines under the General Data Protection Regulation, the EU’s Digital Operational Resilience Act, and the U.S. Securities and Exchange Commission’s Item 1.05 four-business-day breach disclosure rule without expanding headcount. Cybervergent’s agentic AI monitors source code exfiltration and “shadow AI” leaks of proprietary content into public model training sets, a vector that touches trade-secret protection and litigation-hold integrity. Across multiple case studies, AI tools generate audit-ready documentation, traceable evidence trails, and standardized reporting, outputs that information governance and legal-operations teams need for chain-of-custody integrity in regulatory investigations.
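As a purely illustrative aside (not part of the report), the SEC's Item 1.05 four-business-day window can be sketched as a simple deadline calculation. The sketch below assumes weekends are the only non-business days; actual filing deadlines also depend on federal holidays and on when the materiality determination is made, so counsel should never rely on a calculation this simple.

```python
from datetime import date, timedelta

def item_105_deadline(determination: date, business_days: int = 4) -> date:
    """Illustrative only: advance `business_days` business days from the
    date a breach is determined material, skipping weekends. Real filings
    must also account for federal holidays and legal review."""
    d = determination
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday (0) through Friday (4) are business days
            remaining -= 1
    return d

# Materiality determined on Thursday, May 7, 2026 ->
# deadline the following Wednesday, May 13, 2026
print(item_105_deadline(date(2026, 5, 7)))  # 2026-05-13
```

The point of the sketch is how short the window is: a determination made late in the week leaves a weekend inside the clock, which is exactly why the report's emphasis on faster AI-assisted detection and triage translates into disclosure-timeline headroom.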

Reliance risk and the agentic AI horizon

The report is candid about reliance risk. “Heavy reliance on AI can undermine cyber resilience,” the authors said, recommending that security teams combine AI with human judgment, simulate AI failures, and design fail-safes that keep operations functional during AI outages. Talent gaps remain a structural drag: 54 percent of organizations identify a shortage of skilled talent as the primary barrier to AI adoption, with 76 percent of cybersecurity professionals reporting exhaustion in 2025, citing Sophos research.

About 88 percent of enterprises are actively investing in AI agents, the report said, citing KPMG’s Global Tech Report 2026, and Gartner forecasts that by 2028, about 15 percent of day-to-day work decisions will be made autonomously by AI agents. The Forum sketches a four-level autonomy spectrum — from AI that summarizes alerts under full human oversight to “human-out-of-the-loop” agents that autonomously coordinate distributed denial-of-service mitigation, with supervisor agents validating actions against security policy. The choice between levels, the report said, hinges on the reversibility and risk of the action: high autonomy for low-stakes reversible decisions, human-in-the-loop for actions with lasting consequences.
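The reversibility-and-risk rule the report describes can be sketched as a simple policy gate. The level names, the `Action` fields, and the thresholds below are illustrative assumptions, not taken from the paper; a real deployment would encode this logic in its agent-supervision layer.

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    """Four illustrative autonomy levels, from human-driven to human-out-of-the-loop."""
    SUMMARIZE_ONLY = 1      # AI summarizes alerts; humans take every action
    HUMAN_APPROVES = 2      # AI proposes actions; a human must approve each one
    HUMAN_ON_THE_LOOP = 3   # AI acts; a human monitors and can revert
    FULLY_AUTONOMOUS = 4    # AI acts without human involvement

@dataclass
class Action:
    name: str
    reversible: bool   # can the action be cleanly undone?
    high_stakes: bool  # lasting business or legal consequences?

def required_autonomy(action: Action) -> Autonomy:
    """Illustrative gate: high autonomy only for low-stakes, reversible
    actions; keep a human in the loop when consequences are lasting."""
    if action.high_stakes:
        return Autonomy.HUMAN_APPROVES
    if action.reversible:
        return Autonomy.FULLY_AUTONOMOUS
    return Autonomy.HUMAN_ON_THE_LOOP

print(required_autonomy(Action("rate-limit traffic", True, False)).name)   # FULLY_AUTONOMOUS
print(required_autonomy(Action("delete mailbox data", False, True)).name)  # HUMAN_APPROVES
```

Rate-limiting can be lifted in seconds, so it clears the bar for full autonomy; deleting mailbox data cannot be undone and carries litigation-hold implications, so it stays behind human approval, mirroring the report's reversibility test.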

That governance bar is where information governance, privacy, and legal operations teams enter the picture directly. Agentic AI introduces an expanded attack surface, the potential for unintended cascading behaviors across multi-agent environments, and governance gaps where agents are deployed without approval, the authors said. The Forum points to its companion paper, “AI Agents in Action: Foundations for Evaluation and Governance,” as a controls reference.

The takeaway

For cyber, information governance, and eDiscovery leaders, the next steps are concrete. Build a clear AI strategy, validate use cases through structured pilots with go/no-go criteria, and choose a build, buy, or hybrid model based on whether the capability is a strategic differentiator or a commodity utility. Scale only what demonstrates measurable benefit, and ensure the governance perimeter — including human-in-the-loop checkpoints — keeps pace with the autonomy granted to the system.

How will your organization decide where AI takes the wheel — and where a human stays in the loop?

Assisted by GAI and LLM Technologies

Source: ComplexDiscovery OÜ

ComplexDiscovery’s mission is to enable clarity for complex decisions by providing independent, data‑driven reporting, research, and commentary that make digital risk, legal technology, and regulatory change more legible for practitioners, policymakers, and business leaders.


Have a Request?

If you have questions or requests about our information or offerings, please let us know, and we will make responding to you a priority.

ComplexDiscovery OÜ is an independent digital publication and research organization based in Tallinn, Estonia. ComplexDiscovery covers cybersecurity, data privacy, regulatory compliance, and eDiscovery, with reporting that connects legal and business technology developments—including high-growth startup trends—to international business, policy, and global security dynamics. Focusing on technology and risk issues shaped by cross-border regulation and geopolitical complexity, ComplexDiscovery delivers editorial coverage, original analysis, and curated briefings for a global audience of legal, compliance, security, and technology professionals. Learn more at ComplexDiscovery.com.


Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Gemini, Grammarly, Midjourney, and Perplexity, to assist, augment, and accelerate the development and publication of both new and revised content in posts and pages, a practice initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.