Editor’s Note: Cyber leaders no longer have to argue, on faith, that artificial intelligence belongs at the center of defensive operations. With the May 2026 release of “Empowering Defenders: AI for Cybersecurity” — the World Economic Forum’s white paper, produced in collaboration with KPMG — the case is made with quantified results: a $1.9 million average reduction in breach costs, breach lifecycles shortened by roughly 80 days, and operational gains across 20 partner case studies that include IBM, Accenture, Check Point Software, ING, KPMG itself and the Saudi energy giant Aramco.
For cybersecurity, data privacy, regulatory compliance, and eDiscovery professionals, the report matters on three fronts. It documents specific AI-driven detection and response patterns that now map directly to disclosure timelines under the General Data Protection Regulation, the EU’s Digital Operational Resilience Act, and the SEC’s Item 1.05 four-business-day breach reporting rule. It exposes a widening capability gap between AI-resourced enterprises and the mid-market law firms and managed service providers that support breach investigations. And it sketches an agentic-AI roadmap that is likely to push legal operations teams to revisit governance models as AI autonomy increases.
What to watch next: how regulators treat human-in-the-loop checkpoints, and which managed-service providers convert these case-study metrics into client deliverables.
Content Assessment: AI in cybersecurity moves from promise to proof as WEF and KPMG track defender gains
Information: 93%
Insight: 93%
Relevance: 92%
Objectivity: 90%
Authority: 91%
Overall: 92% (Excellent)
A short percentage-based assessment of the qualitative benefit and positive reception of the recent article from ComplexDiscovery OÜ titled, "AI in cybersecurity moves from promise to proof as WEF and KPMG track defender gains."
Industry News – Artificial Intelligence Beat
AI in cybersecurity moves from promise to proof as WEF and KPMG track defender gains
ComplexDiscovery Staff
Artificial intelligence is moving from promise to proof in cybersecurity. Organizations using AI extensively in security have cut average breach costs by $1.9 million and shortened breach lifecycles by about 80 days, according to data anchoring a new World Economic Forum and KPMG white paper released Monday.
The report, “Empowering Defenders: AI for Cybersecurity,” lands at a moment when 94 percent of cyber leaders identify AI as the defining force in their field and 77 percent of organizations already deploy it operationally, figures drawn from the Forum’s Global Cybersecurity Outlook 2026. The shift it documents is concrete: AI is moving from pilot programs and pitch decks into measurable defensive performance across vulnerability detection, threat intelligence triage, phishing analysis, and incident response.
From pilot to production
“AI has the potential to shift the balance towards defenders,” said Akshay Joshi, head of the Centre for Cybersecurity at the World Economic Forum. “Organizations that treat it as a strategic capability, rather than a standalone tool, will be better placed to turn growing cyber risk into resilience and competitive advantage.”
The paper, produced in collaboration with KPMG and built on contributions from 105 representatives across 84 organizations and 15 industries, follows the Forum’s 2025 publication that warned about AI cybersecurity risks. The 2026 edition pivots from risk to deployment, examining 20 partner-submitted case studies from companies including Allianz, Aramco, Google, IBM, ING, Microsoft, Santander Group, and Standard Chartered. The metrics in those case studies are self-reported by the submitting organizations; KPMG also contributed a case study describing efficiency gains in its own threat intelligence operations.
Numbers worth knowing
The reported case-study metrics are notable, but they should be read as organization-submitted performance indicators rather than independently audited industry benchmarks.
IBM said its Autonomous Threat Operations Machine, or ATOM, launched in April 2025, handles about 95 percent of daily security investigations at the company’s managed security services arm, automating over 850 analyst hours each month and cutting end-to-end investigation time by 37 percent. Accenture deployed an AI capability called Agent Oliver across over 100,000 internet-facing sites; analysis time per site dropped from about 15 minutes to under one minute, a 93 percent reduction in manual effort. KPMG’s threat intelligence team reported a 25 percent increase in operational efficiency after introducing a custom AI model trained on its threat repository. Check Point Software’s Universe research platform compressed investigation cycles from about three weeks of manual effort to roughly one hour. Dream Group cut malware remediation guidance time by up to 95 percent.
Adversaries are applying similar AI capabilities
Attackers are using AI to conduct reconnaissance, generate malware, evade detection, and launch attacks at scale, compressing what once took weeks into minutes and lowering the technical barrier for entry-level operators, the report said. The defenders’ edge, the Forum argues, lies in proprietary internal data that attackers cannot match — context the AI can use to prioritize the risks that actually matter to a specific environment.
Those reported gains help explain why chief information security officer budgets are tilting toward AI even as governance uncertainty grows. As of the Forum’s January 2026 outlook, 53 percent of cybersecurity teams reported underfunding and 55 percent reported understaffing, citing ISACA’s State of Cybersecurity 2025. AI is being positioned as the force multiplier that closes the operational gap.
“Attackers are moving faster and at greater scale than ever before. This report is a call to action for organizations to match that pace, with AI as a force multiplier for cyber defence,” said Laurent Gobbi, partner and global head of cyber and tech risk at KPMG.
What it means for IG and eDiscovery
Adoption is uneven. Larger enterprises with greater technical maturity report higher AI-in-security usage, while small and medium businesses, governments, and non-governmental organizations lag because of financial constraints, skills gaps, and data immaturity, the report said. That split has direct consequences for mid-market law firms, regional managed service providers, and government cyber units that feed into legal-discovery and information-governance workflows. Firms that build AI-augmented incident response into their service catalog will compete differently for breach work; those that do not will face client and request-for-proposal pressure as enterprises raise the bar for managed cyber services.
For information governance and eDiscovery professionals, the deployment patterns documented in the paper map directly onto compliance obligations. ING said its machine learning data leakage prevention pipeline has processed 5 million alerts and lifted analyst precision by 20 percent across over 60,000 employees — throughput at a scale relevant to disclosure obligations under the General Data Protection Regulation, the EU’s Digital Operational Resilience Act and the U.S. Securities and Exchange Commission’s Item 1.05 four-business-day breach disclosure window, achieved without expanding headcount. Cybervergent’s agentic AI monitors source code exfiltration and “shadow AI” leaks of proprietary content into public model training sets, a vector that touches trade-secret protection and litigation-hold integrity. Across multiple case studies, AI tools generate audit-ready documentation, traceable evidence trails and standardized reporting — outputs that information governance and legal-operations teams need for chain-of-custody integrity in regulatory investigations.
Reliance risk and the agentic AI horizon
The report is candid about reliance risk. “Heavy reliance on AI can undermine cyber resilience,” the authors said, recommending that security teams combine AI with human judgment, simulate AI failures, and design fail-safes that keep operations functional during AI outages. Talent gaps remain a structural drag: 54 percent of organizations identify a shortage of skilled talent as the primary barrier to AI adoption, with 76 percent of cybersecurity professionals reporting exhaustion in 2025, citing Sophos research.
About 88 percent of enterprises are actively investing in AI agents, the report said, citing KPMG’s Global Tech Report 2026, and Gartner forecasts that by 2028, about 15 percent of day-to-day work decisions will be made autonomously by AI agents. The Forum sketches a four-level autonomy spectrum — from AI that summarizes alerts under full human oversight to “human-out-of-the-loop” agents that autonomously coordinate distributed denial-of-service mitigation, with supervisor agents validating actions against security policy. The choice between levels, the report said, hinges on the reversibility and risk of the action: high autonomy for low-stakes reversible decisions, human-in-the-loop for actions with lasting consequences.
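The report's autonomy-selection principle — high autonomy for low-stakes reversible decisions, human-in-the-loop for actions with lasting consequences — can be sketched as a simple policy routine. The level names below echo the Forum's four-level spectrum, but the specific decision thresholds are illustrative assumptions for discussion, not definitions taken from the white paper:

```python
from enum import Enum

class AutonomyLevel(Enum):
    SUMMARIZE_ONLY = 1      # AI summarizes alerts; humans make every decision
    HUMAN_IN_THE_LOOP = 2   # AI proposes actions; a human approves each one
    HUMAN_ON_THE_LOOP = 3   # AI acts; humans monitor and can intervene
    HUMAN_OUT_OF_LOOP = 4   # AI acts autonomously within policy guardrails

def select_autonomy(reversible: bool, impact: str) -> AutonomyLevel:
    """Map an action's reversibility and impact ('low' or 'high') to an
    autonomy level. The mapping is an illustrative reading of the report's
    reversibility-and-risk principle, not the paper's own rubric."""
    if not reversible and impact == "high":
        # Lasting, high-stakes consequences: keep a human in the loop.
        return AutonomyLevel.HUMAN_IN_THE_LOOP
    if not reversible or impact == "high":
        # One risk factor present: let AI act, but under human monitoring.
        return AutonomyLevel.HUMAN_ON_THE_LOOP
    # Reversible and low-stakes: full autonomy, e.g. DDoS mitigation steps
    # validated by a supervisor agent against security policy.
    return AutonomyLevel.HUMAN_OUT_OF_LOOP
```

In practice, a governance team would extend a routine like this with per-action-type policies and audit logging, so that the autonomy granted to each agent action is recorded alongside the approval trail.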
That governance bar is where information governance, privacy, and legal operations teams enter the picture directly. Agentic AI introduces an expanded attack surface, the potential for unintended cascading behaviors across multi-agent environments, and governance gaps where agents are deployed without approval, the authors said. The Forum points to its companion paper, “AI Agents in Action: Foundations for Evaluation and Governance,” as a controls reference.
The takeaway
For cyber, information governance, and eDiscovery leaders, the next steps are concrete. Build a clear AI strategy, validate use cases through structured pilots with go/no-go criteria, and choose a build, buy, or hybrid model based on whether the capability is a strategic differentiator or a commodity utility. Scale only what demonstrates measurable benefit, and ensure the governance perimeter — including human-in-the-loop checkpoints — keeps pace with the autonomy granted to the system.
How will your organization decide where AI takes the wheel — and where a human stays in the loop?
News sources
- New Report Shows How AI Gives Cybersecurity Competitive Advantage (World Economic Forum)
- Empowering Defenders: AI for Cybersecurity (white paper) (World Economic Forum / KPMG)
- Global Cybersecurity Outlook 2026 (World Economic Forum)
- Cost of a Data Breach Report 2025 (IBM)
- Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards (2025) (World Economic Forum)
- IBM Delivers Autonomous Security Operations with Cutting-Edge Agentic AI (IBM Newsroom)
- State of Cybersecurity 2025 Report (ISACA)
- The Human Cost of Vigilance: Addressing Cybersecurity Burnout in 2025 (Sophos)
- KPMG Global Tech Report 2026 (KPMG)
- Gartner Says That in the Age of GenAI, Preemptive Capabilities — Not Detection and Response — Are the Future of Cybersecurity (Gartner)
- Regulation (EU) 2016/679 — General Data Protection Regulation (GDPR) (EUR-Lex)
- Regulation (EU) 2022/2554 — Digital Operational Resilience Act (DORA) (EUR-Lex)
- SEC Adopts Rules on Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure (Item 1.05) (U.S. Securities and Exchange Commission)
- Cyber Frontiers: AI & Cyber initiative (World Economic Forum)
Assisted by GAI and LLM Technologies
Additional reading
- China’s Meta-Manus block adds new risk layer to cross-border AI diligence
- Stakeholder governance gets a stricter audit
- Andrew Haslam’s eDisclosure Systems Buyers Guide at 14: What the 1H 2026 update reveals
- A Complete Analysis of the Winter 2026 eDiscovery Pricing Survey
- The M&A Risk of Confusing Market Velocity with Marketing Capability
- Confidence Meets Complexity: Full Results from the 2H 2025 eDiscovery Business Confidence Survey
- Making the Subjective Objective: A Scoring Framework for Evaluating eDiscovery Vendor Viability in 2026
- eDiscovery Vendor Viability Scoring Tool: Making the Subjective Objective
- Beyond Public Cloud: The Enduring Case for Deployment Flexibility in eDiscovery
Source: ComplexDiscovery OÜ

ComplexDiscovery’s mission is to enable clarity for complex decisions by providing independent, data‑driven reporting, research, and commentary that make digital risk, legal technology, and regulatory change more legible for practitioners, policymakers, and business leaders.