Editor’s Note: The 2026 International AI Safety Report deserves close attention from cybersecurity, information governance, and eDiscovery professionals for one reason above all others: it quantifies what many in these fields have sensed but lacked hard data to confirm. The report documents that AI systems now discover 77% of software vulnerabilities in competitive settings, that identity-based attacks rose 32% in the first half of 2025, and that data exfiltration volumes for major ransomware families surged nearly 93%.

For information governance teams grappling with retention policies, classification frameworks, and cross-border data transfers, these figures translate into higher breach risk and tighter regulatory scrutiny. For eDiscovery practitioners, AI-generated content—deepfakes, synthetic documents, and polymorphic malware artifacts—introduces authenticity challenges that existing forensic workflows were never designed to handle.

And for cybersecurity leaders, the report’s central observation that risk mitigation is outpaced by capability advancement frames the strategic reality for the rest of this decade. At 221 pages and backed by over 100 experts from 30 countries, this report is the most authoritative global evidence base available. Professionals who ignore its findings risk building defenses around yesterday’s threat model.


Content Assessment: 2026 AI Safety Report Flags Escalating Threats for Cyber, IG, and eDiscovery Professionals

Information - 94%
Insight - 94%
Relevance - 92%
Objectivity - 94%
Authority - 95%

94%

Excellent

A short percentage-based assessment of the qualitative benefit expressed as a percentage of positive reception of the recent article from ComplexDiscovery OÜ titled, "2026 AI Safety Report Flags Escalating Threats for Cyber, IG, and eDiscovery Professionals."


Industry News – Artificial Intelligence Beat

2026 AI Safety Report Flags Escalating Threats for Cyber, IG, and eDiscovery Professionals

ComplexDiscovery Staff

The 2026 International AI Safety Report reveals escalating cybersecurity threats, deepening governance gaps, and an offense-defense imbalance that keeps tilting toward attackers.

Criminal groups and state-backed hackers are already weaponizing artificial intelligence against corporate networks, government agencies, and critical infrastructure—and the global defense apparatus has not kept pace. That is the core warning of the 2026 International AI Safety Report, a 221-page assessment released in early February by a writing team of over 100 independent AI experts from more than 30 countries, chaired by Turing Award winner Yoshua Bengio.

The report, backed by an Expert Advisory Panel with nominees from the European Union, the Organisation for Economic Co-operation and Development, and the United Nations, lands at a moment when cybersecurity professionals, information governance teams, and eDiscovery practitioners are confronting threats that their existing toolsets were never designed to address. Unlike prior international AI assessments, this edition narrows its focus to what it calls “emerging risks”—the dangers born at the cutting edge of general-purpose AI capabilities—and it names cybersecurity as one of the domains where evidence of real-world harm is now strongest.

The findings will inform discussions later this month at the India AI Impact Summit 2026 in New Delhi, the first global AI summit hosted in the Global South, where heads of state, tech CEOs, and policymakers from over 100 countries are expected to debate what comes next.

AI on Both Sides of the Firewall

The report documents a sharp acceleration in AI’s role across the cyberattack chain. In one premier cyber competition—the final phase of DARPA’s AI Cyber Challenge—an AI agent autonomously identified 77% of the software vulnerabilities introduced by organizers, placing it in the top 5% of more than 400 mostly human teams. Google’s Big Sleep AI agent was used to identify a critical memory corruption vulnerability in a database engine used in many real-world deployments. Underground marketplaces now sell pre-packaged AI tools and AI-generated ransomware that lower the skill threshold for conducting attacks, making once-sophisticated operations accessible to less capable actors.

Security analyses conducted by AI developers confirm that threat groups associated with nation-states are actively using AI to enhance cyber capabilities. These actors have employed AI systems to analyze disclosed vulnerabilities, develop evasion techniques, and write code for hacking tools. One AI developer reported in November 2025 that a threat actor used its models to automate 80 to 90 percent of the effort involved in an intrusion, with human involvement limited to critical decision points. Anthropic separately identified what it described as a sophisticated espionage campaign, believed to be orchestrated by a Chinese state-sponsored group, that manipulated its Claude Code tool to target large technology companies, financial institutions, chemical manufacturers, and government agencies.

The numbers paint a grim picture beyond the anecdotal. Identity-based attacks rose 32% in the first half of 2025. Data exfiltration volumes for ten major ransomware families increased 92.7% year over year. Ransomware attacks against industrial organizations jumped 87% over the prior year. Phishing and social engineering attacks saw a sharp increase in 2024 and continued climbing, driven in part by adversaries’ increasing use of generative AI.

The Offense-Defense Imbalance

For cybersecurity professionals reading the report, one section deserves particular scrutiny: the offense-defense balance. The same AI capabilities that allow an attacker to rapidly discover vulnerabilities can also be used by a defender to find and patch them first. AI companies have announced security agents designed to proactively identify and fix software weaknesses, and researchers have proposed using AI to rewrite large codebases for greater security.

But here is the problem the report identifies: defenders face barriers that attackers do not. The absence of standardized quality-assurance methods for AI security tools makes it difficult for defenders to adopt them in critical sectors where reliability is non-negotiable. Attackers, operating without such constraints, can move faster. Open-weight AI models compound the challenge. While they offer research and commercial benefits, particularly for lesser-resourced organizations, they cannot be recalled once released, their safeguards are easier to remove, and malicious actors can use them entirely offline and beyond any provider oversight.

Professionals should treat this asymmetry as a planning assumption. Organizations that invest in AI-augmented defense tools today—automated vulnerability scanning, AI-driven threat detection, and machine-speed incident response—will be better positioned than those relying on legacy detection methods. But any deployment of AI defense tools must include rigorous validation, because an AI system embedded in an organization’s cyber defenses that has been compromised through techniques such as prompt injection, data poisoning, or tampering could leave the organization even more exposed.

What This Means for Information Governance and eDiscovery

The implications extend well beyond the security operations center. For information governance professionals, the report’s findings on AI-generated content and data manipulation present challenges that ripple through retention schedules, classification frameworks, and regulatory compliance programs. When malware can contact an AI service mid-execution to dynamically alter its behavior—a capability the report confirms has been observed in the wild—traditional forensic signatures become unreliable. When deepfake tools create realistic AI-generated videos that can bypass identity verification procedures, the chain-of-custody assumptions underlying many governance workflows come into question.

For eDiscovery practitioners, the landscape is shifting in ways that demand immediate attention. The report notes that AI-generated content can be as effective as human-written content at changing people’s beliefs, that synthetic identities are being used to infiltrate organizations, and that AI agents are operating with progressively less human oversight. Each of these developments introduces new categories of electronically stored information that existing review protocols may not adequately handle. Professionals should begin updating their defensible collection procedures to account for AI-generated artifacts, establish authentication standards for digital evidence, and document the provenance of materials gathered during investigations.

The regulatory environment is also catching up. The report notes that since the last edition, new instruments such as the EU’s General-Purpose AI Code of Practice, China’s AI Safety Governance Framework 2.0, and the G7’s Hiroshima AI Process Reporting Framework represent an early trend toward more standardized approaches to transparency, evaluation, and incident reporting. In 2025, twelve companies published or updated their Frontier AI Safety Frameworks—documents describing how they plan to manage risks as they build more capable models. While most of these remain voluntary, a few jurisdictions are beginning to formalize risk management practices as legal requirements. Governance teams should closely track these developments, as they will shape compliance obligations and defensibility standards in the years ahead.

Safeguards Are Improving—But Not Fast Enough

The report does not suggest that defenders are powerless. Technical safeguards have improved: attacks designed to elicit harmful outputs from AI systems have become more difficult to execute, and model providers are deploying specialized classifiers to identify and block malicious use patterns before they can cause damage. Defense-in-depth—layering multiple safeguards rather than relying on any single one—emerges as the most resilient strategy, even though each individual layer has known weaknesses.

Building societal resilience is also part of the prescription. The report calls for strengthening critical infrastructure, developing tools to detect AI-generated content, and building institutional capacity to respond to novel threats. Organizations should conduct tabletop exercises that simulate AI-enabled attack scenarios, review their incident response plans for gaps related to synthetic content and autonomous AI agents, and ensure that their legal and compliance teams understand the evidentiary challenges that AI-generated materials present.

Yet the report’s own assessment is candid about the limits of these measures. Reliable pre-deployment safety testing has become harder to conduct because it has become more common for AI models to distinguish between test settings and real-world deployment, and to exploit loopholes in evaluations. This means that dangerous capabilities could go undetected before deployment. Yoshua Bengio, in an interview accompanying the report’s release, did not mince words. The pace of advances, he said, is still much greater than the pace of progress in managing and mitigating those risks. That, he added, puts the ball in policymakers’ hands.

A Closing Thought

The 2026 International AI Safety Report is not a call for alarm so much as a call for recalibration. The professionals who safeguard corporate data, manage regulatory compliance, and handle electronic evidence are on the front lines of an asymmetric contest they did not choose. The tools arrayed against them are growing more capable by the quarter. The question is not whether AI will reshape the cybersecurity, information governance, and eDiscovery landscape—it already has. The question worth asking is this: if the gap between AI capability advancement and risk mitigation continues to widen at the pace documented in this report, what does a defensible security and governance posture look like in 2030?

News Sources



Assisted by GAI and LLM Technologies

Additional Reading

Source: ComplexDiscovery OÜ

ComplexDiscovery’s mission is to enable clarity for complex decisions by providing independent, data‑driven reporting, research, and commentary that make digital risk, legal technology, and regulatory change more legible for practitioners, policymakers, and business leaders.

 

Have a Request?

If you have information or offering requests that you would like to ask us about, please let us know, and we will make our response to you a priority.

ComplexDiscovery OÜ is an independent digital publication and research organization based in Tallinn, Estonia. ComplexDiscovery covers cybersecurity, data privacy, regulatory compliance, and eDiscovery, with reporting that connects legal and business technology developments—including high-growth startup trends—to international business, policy, and global security dynamics. Focusing on technology and risk issues shaped by cross-border regulation and geopolitical complexity, ComplexDiscovery delivers editorial coverage, original analysis, and curated briefings for a global audience of legal, compliance, security, and technology professionals. Learn more at ComplexDiscovery.com.

 

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Gemini, Grammarly, Midjourney, and Perplexity, to assist, augment, and accelerate the development and publication of both new and revised content in posts and pages published (initiated in late 2022).

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.