Editor’s Note: A widening gap between AI investment and measurable productivity gains is forcing a reckoning across enterprise technology — and the legal industry is absorbing that correction with particular intensity. Law firms accelerated technology spending at record rates in 2025 while survey after survey showed clients receiving little tangible benefit. In eDiscovery, AI review tools have driven per-document costs down sharply, yet the oversight, quality control, and billing structures around those tools remain underdeveloped.

Cybersecurity, data privacy, and regulatory compliance professionals should track this dynamic closely. AI-driven legal workflows that prioritize speed over thoroughness create downstream data governance risks: privilege calls without adequate human review, automated classification decisions that lack defensible audit trails, and disposition workflows that may not satisfy jurisdiction-specific requirements. The EU AI Act’s August 2026 compliance deadline for high-risk AI systems adds a regulatory cost layer that most organizations have not yet budgeted for.

Watch for whether the legal industry’s billing model tension — firms pocketing AI efficiency gains while raising hourly rates — triggers a broader shift toward alternative fee arrangements and transparent net-productivity metrics. The organizations that demand those metrics first will set the terms for how AI is valued across the legal services ecosystem.


Content Assessment: We Wanted Smarter Legal Tech, but Instead Got an Expensive Dependency

Information - 94%
Insight - 93%
Relevance - 92%
Objectivity - 94%
Authority - 92%

93%

Excellent

A short percentage-based assessment of the positive reception of the recent article from ComplexDiscovery OÜ titled, "We Wanted Smarter Legal Tech, but Instead Got an Expensive Dependency."


Industry News – Artificial Intelligence Beat

We Wanted Smarter Legal Tech, but Instead Got an Expensive Dependency

ComplexDiscovery Staff

The legal industry poured billions into artificial intelligence with a seductive promise: faster reviews, leaner operations, sharper insights. What it got, increasingly, looks like the same old work wearing a new interface — and a steeper invoice to match.

That observation stings because it should not be surprising. Across the enterprise technology landscape, the gap between what AI vendors promised and what organizations have actually received is widening into a chasm that even the most optimistic chief technology officers cannot ignore. Forrester’s 2026 predictions put it bluntly: enterprises will defer 25 percent of their planned AI spending into 2027, as financial rigor catches up with the hype. Only 15 percent of AI decision-makers reported any measurable lift to their organization’s EBITDA over the preceding 12 months, according to the same research. Fewer than one in three could tie AI’s value to changes on their profit-and-loss statement.

PwC’s 29th Global CEO Survey, released in January 2026, delivered an even starker verdict. Fifty-six percent of CEOs worldwide — across 4,454 respondents in 95 countries — said their companies had realized neither revenue gains nor cost reductions from AI investments. Just one in eight reported achieving both. PwC Global Chairman Mohamed Kande attributed the shortfall to organizations chasing AI deployment while neglecting foundational work: data infrastructure, process redesign, and governance frameworks. The unsexy plumbing that determines whether any technology actually delivers results.

The legal industry sits squarely inside this reckoning. According to the 2026 Report on the State of the US Legal Market from Thomson Reuters and Georgetown Law’s Center on Ethics and the Legal Profession, law firms increased technology spending by 9.7 percent and knowledge management spending by 10.5 percent — growth rates the report described as likely the fastest the legal industry has ever experienced. Firms scrambled to deploy generative AI capabilities while simultaneously managing a 2.5 percent increase in billable hours. The money flowed. Whether the returns followed is a different question entirely.

Here is the uncomfortable math. While Clio’s data shows AI adoption among legal professionals surged from 19 percent to 79 percent between 2023 and 2024, the 2025 Legal Trends Report revealed that the figure flatlined at the same 79 percent — a plateau that signals the transition from adoption to productive implementation has stalled against the friction of legacy billing models and outdated data infrastructure. Meanwhile, the share of legal professionals using legal-specific AI tools actually dropped from 58 percent to 40 percent, suggesting much of the industry’s AI activity involves general-purpose tools rather than purpose-built legal technology.

Axiom’s 2026 In-House Legal Budgeting Survey, conducted by The Harris Poll across 530 senior legal decision-makers, found that 78 percent of legal departments have been mandated to implement AI without dedicated funding — creating an unfunded mandate that undermines the careful integration AI requires. And even where AI is deployed, only 6 percent of law firms pass efficiency gains to clients through reduced fees, while 34 percent actually charge premium rates for AI-enhanced work, according to Axiom’s separate research on general counsel.

A joint survey from the Association of Corporate Counsel and Everlaw, drawing on 657 in-house professionals across 30 countries, sharpened the picture: nearly 60 percent of in-house counsel reported no noticeable savings from their outside counsel’s use of AI. Fifty-eight percent pointed to a deeper structural issue — law firms have not adjusted their pricing to reflect generative AI-driven efficiencies.

This is the legal industry’s AI paradox. Firms deploy technology capable of completing in minutes what once took hours of associate time — and then try to bill for it by the hour anyway. Everlaw’s 2025 eDiscovery Innovation Report found that nearly half of legal professionals reclaim one to five hours per week through generative AI — time savings that, across a year, amount to over 30 working days. Yet 90 percent of respondents in the same survey said that AI has either already altered conventional billing practices or will do so within two years, an acknowledgment that the billing model has not kept pace with the technology. Ninety percent of legal spending still flows through standard hourly rate arrangements, according to the Georgetown report, creating a structural tension so acute that the report itself called it “almost absurd.” The efficiency gains exist in a vacuum. They accrue to firm profitability, not to the clients who ultimately fund the technology through rising rates.
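The annualized figure above is straightforward arithmetic; a minimal sketch for readers who want to check it against their own numbers (the 48 working weeks and 8-hour day are illustrative assumptions, not figures from the Everlaw survey):

```python
# Convert weekly hours reclaimed through AI into equivalent working days per year.
# Assumptions (not from the survey): 48 working weeks per year, 8-hour workday.
WEEKS_PER_YEAR = 48
HOURS_PER_DAY = 8

def annual_days_saved(hours_per_week: float) -> float:
    """Weekly hours reclaimed, expressed as working days per year."""
    return hours_per_week * WEEKS_PER_YEAR / HOURS_PER_DAY

# At the top of the reported one-to-five-hour weekly range:
print(annual_days_saved(5))  # 30.0 working days
```

With a 50-week year instead, the same five weekly hours yields just over 31 days — consistent with the report's "over 30 working days" framing.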

Where the Gains Are Real — and Where They Aren’t

The eDiscovery sector illustrates both the promise and the trap. Per-document AI review costs have dropped to between $0.11 and $0.50, down from the $1.50 to $3.00 that human reviewers commanded as recently as two years ago, according to eDiscovery industry pricing surveys. Relativity reported that its aiR product line — adopted by hundreds of customers across over 2,000 projects, with over 190 million review decisions as of early 2026 — has delivered time savings of 50 to 70 percent in certain review and data breach response workflows. In October 2025, Relativity announced it would fold its aiR for Review and aiR for Privilege generative AI tools into the standard RelativityOne package starting in early 2026, a move that effectively commoditizes a capability vendors have been pricing as premium.
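The scale of those per-document savings becomes vivid at corpus size; a simple illustrative comparison (the one-million-document corpus is an assumption chosen only to show scale, and the flat per-document rates ignore volume discounts and QC overhead):

```python
# Illustrative review-cost comparison at the per-document rates cited above.
# Corpus size is a hypothetical; real matters vary widely.
DOCS = 1_000_000

def review_cost(rate_per_doc: float, docs: int = DOCS) -> float:
    """Total review cost at a flat per-document rate."""
    return rate_per_doc * docs

ai_range = (review_cost(0.11), review_cost(0.50))
human_range = (review_cost(1.50), review_cost(3.00))
print(f"AI review:    ${ai_range[0]:,.0f} to ${ai_range[1]:,.0f}")
print(f"Human review: ${human_range[0]:,.0f} to ${human_range[1]:,.0f}")
```

Even at the top of the AI range and the bottom of the human range, the gap is a full order of magnitude — which is precisely why the oversight question in the next paragraphs matters.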

Those are real gains in specific, well-defined tasks — and an important distinction applies here. Technology-assisted review and continuous active learning have over a decade of case law validation and measurable performance data behind them. Courts have accepted TAR methodologies since Judge Andrew Peck’s landmark ruling in Da Silva Moore v. Publicis Groupe in 2012, and subsequent decisions have reinforced their defensibility. Nobody disputes that mature, well-understood AI-assisted review can reduce the volume of documents requiring human eyes by 80 to 90 percent when properly deployed. The global eDiscovery market already exceeds $15 billion and is forecast to grow at 8 to 11 percent annually through 2032, driven in large part by AI-enabled review and analytics.

The ROI challenge is sharper for the newer generative AI capabilities now being layered on top of those established workflows — summarization, privilege detection, document drafting, case strategy extraction. These tools are 18 months into enterprise deployment, not a decade. Exception handling, quality control, and contract structures around generative AI services in eDiscovery remain underdeveloped by the industry’s own admission. The question hanging over every AI-accelerated review is whether the cost savings are being reinvested in human quality control or simply pocketed — with oversight reduced in the name of efficiency. When the technology makes errors at scale, the consequences compound at scale too. A missed privileged document in a review of millions carries the same risk it always did; the only thing that changed is how fast the mistake was made.

The Verification Tax Nobody Measures

Beyond eDiscovery, the productivity claims that vendors attach to generative AI tools across industries deserve serious scrutiny — and the pattern they reveal has direct implications for legal work. A randomized controlled study by METR, a nonprofit AI research organization, published in mid-2025, recruited 16 experienced open-source software developers and randomly assigned 246 real coding tasks to be completed with or without AI tools. The developers using AI — primarily Cursor Pro with Claude 3.5 and 3.7 Sonnet — actually completed their tasks 19 percent slower than those working without it. The perception gap was jarring: those same developers estimated before starting that AI would make them 24 percent faster, and still believed they were 20 percent faster after completing the tasks. They felt productive. The stopwatch said otherwise.

The METR study measured software engineering, not legal drafting — and those are different disciplines with different complexity profiles. But the underlying dynamic it exposed is domain-agnostic: when professionals trust AI output without fully verifying it, they feel faster while actually losing time to the hidden costs of correction. Workday’s January 2026 research confirmed this across a broader workforce, finding that 37 percent of time supposedly saved by AI gets consumed by reviewing, correcting, and verifying AI-generated output. Only 14 percent of employees consistently achieved clear, positive net outcomes from their AI use.

In legal work, where precision carries professional liability, those verification costs are likely higher, not lower. Consider an illustrative scenario: a mid-size litigation firm that invested six figures in a generative AI drafting platform last year. An associate uses it to produce a motion to compel in 30 minutes instead of three hours. The partner reviewing it spends 90 minutes verifying every citation, checking for hallucinated case law, and rewriting passages that sound confident but say nothing. The net time saving is 60 minutes — and that assumes the verification catches every error. No published study has yet measured this net-of-verification cost in legal practice specifically, which is itself part of the problem. The firms absorbing these verification costs rarely quantify them, which means the productivity metrics they report to clients and in industry surveys systematically overstate AI’s net contribution.

The gap between perceived and actual productivity is not a minor inconvenience. It distorts investment decisions. When a firm’s leadership believes AI is saving 20 percent of associate time but the actual saving — net of verification — is closer to 5 percent, the return on their six-figure AI investment looks very different.
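The motion-to-compel scenario reduces to a simple net-of-verification calculation; a minimal sketch using the hypothetical figures from that scenario (all values are illustrative minutes, not measured data):

```python
# Net-of-verification time saving for the illustrative motion-to-compel
# scenario: manual drafting time minus (AI draft time + human verification).
# All figures are hypothetical minutes from the scenario, not measured data.

def net_saving(baseline: float, ai_draft: float, verification: float) -> float:
    """Time saved after accounting for verification of AI output."""
    return baseline - (ai_draft + verification)

baseline = 180      # three associate-hours of manual drafting
ai_draft = 30       # AI-assisted first draft
verification = 90   # partner review: citations, hallucination checks, rewrites

saved = net_saving(baseline, ai_draft, verification)
print(f"Net saving: {saved:.0f} minutes ({saved / baseline:.0%} of baseline)")
# Net saving: 60 minutes (33% of baseline)
```

Gross metrics would report the 150 minutes the associate saved; the net figure is 60 — and any verification error that slips through makes even that number optimistic.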

The Information Governance Blind Spot

Information governance professionals face a variant of the same problem that rarely makes the trade publication headlines. Vendors have aggressively marketed AI-powered records classification, automated retention scheduling, and defensible disposition workflows. The pitch is compelling: let machine learning sort through decades of accumulated data, classify it according to retention policies, and flag what can be deleted.

In practice, training these models on organization-specific retention schedules — which vary by jurisdiction, by record type, and by regulatory framework — remains a labor-intensive and error-prone process. An AI system that confidently classifies a document as eligible for disposition when it should have been held under a litigation hold creates a spoliation risk that no efficiency gain can offset. The audit trail requirements for defensible disposition mean that every AI classification decision must be traceable, explainable, and reviewable — adding layers of governance overhead that partially negate the time savings the technology was supposed to deliver.

The problem compounds for organizations operating across multiple jurisdictions. A multinational’s retention policy might touch GDPR’s right to erasure, US state privacy laws, SEC record-keeping requirements, and industry-specific regulations simultaneously. Training an AI model to navigate those overlapping obligations reliably — and documenting that it did so correctly — is a compliance challenge that vendors’ marketing materials consistently understate.

The Regulatory Cost Nobody Budgeted For

Compounding the investment challenge, a wave of regulatory requirements around AI in legal practice is adding compliance costs that most firms did not factor into their AI budgets. The American Bar Association’s Formal Opinion 512 established a national baseline requiring lawyers to verify all AI-generated legal citations before filing. California’s State Bar issued practical guidance mandating that attorneys understand large language model limitations — including hallucination risks and data privacy exposure — before deploying them. The New York State Bar Association’s AI Task Force produced a phased roadmap for secure AI adoption that creates ongoing compliance obligations.

The judiciary is charting its own uneven course. Since the Mata v. Avianca sanctions in 2023 — where attorneys were fined for submitting ChatGPT-hallucinated case citations — federal and state judges have issued hundreds of standing orders governing AI use in court filings, with no uniform standard. Some require disclosure of which AI tool was used and where; others demand certification that a human verified every citation; still others impose no requirements at all. Federal judges are themselves experimenting with AI in their chambers — even as the rules they impose on practitioners vary courtroom to courtroom. For litigants, the patchwork means that AI-assisted work product acceptable in one jurisdiction may trigger sanctions in the next — adding yet another compliance variable to the cost of deployment.

Those domestic requirements arrive alongside the EU AI Act, which becomes fully applicable on August 2, 2026. AI systems used in legal contexts — particularly those involved in access to justice or interpretation of law — plausibly face classification as high-risk under the Act, though how regulators will apply those categories to specific legal technology tools is still being interpreted through guidance documents and early enforcement decisions. Where a tool does fall into a high-risk category, the obligations are substantial: risk management frameworks, conformity assessments, technical documentation, and registration in the EU database. Industry estimates from early compliance analyses place implementation costs for high-risk AI systems at $2 million to $15 million depending on organizational size — figures the regulation itself does not prescribe but that reflect the operational burden of meeting its requirements. Penalties for non-compliance, by contrast, are statutory: up to 35 million euros or 7 percent of global turnover.

For legal technology buyers already struggling to demonstrate ROI from their AI investments, these regulatory costs represent a new line item that further erodes the business case. A firm that deployed generative AI tools in 2024 expecting quick efficiency gains now faces the prospect of spending additional resources to ensure those same tools meet evolving ethical and regulatory standards — before they have recouped the original investment.

The Broader Reckoning

Gartner added its own sobering projection in June 2025: over 40 percent of agentic AI projects across all industries will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls. The research firm estimated that only about 130 of the thousands of vendors marketing agentic AI capabilities are building genuine agent technology. The rest are engaged in what Gartner called “agent washing” — rebranding existing automation products, chatbots, and robotic process automation tools with an agentic label. The legal technology market, already fragmented and prone to buzzword adoption, is especially vulnerable to this dynamic.

At the enterprise level, the numbers reinforce the pattern. Forty-two percent of companies abandoned most of their AI initiatives in 2025, up from 17 percent the prior year, according to S&P Global’s Voice of the Enterprise survey. A report from MIT’s NANDA initiative, “The GenAI Divide,” found that roughly 95 percent of AI pilots delivered no measurable impact on profit-and-loss statements — though critics note the study’s six-month evaluation window may undercount longer-horizon returns. A Deloitte survey of director-to-C-suite leaders found 66 percent claiming productivity gains from AI, but only 20 percent reporting revenue growth — a gap that suggests much of the reported productivity either does not translate to financial results or gets absorbed by the cost of the AI infrastructure itself.

None of this means AI in legal technology is worthless. It means the industry has been measuring the wrong things, or measuring the right things in ways that flatter the technology rather than testing it. A parallel from outside the legal world is instructive. Estonia built one of the most celebrated digital government platforms on earth — a national e-state initiative that became a case study in public-sector technology, drawing delegations from dozens of countries eager to replicate its model. But in an April 2026 opinion piece for ERR News, the English-language service of Estonian Public Broadcasting, journalist Nils Niitra argued that the program’s real legacy was an expensive dependency: IT spending and government staffing both grew rather than shrank, and the promised leaner, cheaper state never materialized. Estonia’s digital investment created what Niitra described as a new layer of bureaucratic fat atop the old one — the paper folder became a digital folder, the stamp became a digital stamp, the queue became a portal, but nothing substantive changed.

Legal technology risks the same trajectory. A keyword-and-filter document review becomes an AI-intensive document review. A template-driven contract analysis becomes an AI-assisted contract analysis. A records clerk’s classification judgment becomes an algorithm’s classification judgment. The vocabulary changes; the underlying workflow stays remarkably similar. And layered on top are new costs — licensing fees, integration expenses, training hours, quality assurance processes for AI outputs, regulatory compliance overhead, and the specialized staff required to manage and prompt the systems. The old process has not been replaced. It has been supplemented at a premium.

General counsel offices are noticing — and the structural inertia is becoming harder to defend. Axiom’s 2026 GC Survey of 516 senior in-house legal leaders across eight countries found that 61 percent continue sending work to law firms out of habit rather than strategic choice, even as 80 percent plan to move certain firm work in-house or to alternative providers within 24 months. When 94 percent of in-house leaders express interest in alternative legal service models that combine flexible talent with vetted AI tools — as Axiom’s research found — that is not enthusiasm for technology. That is a market signal from buyers who feel they are paying for someone else’s AI experiment.

The path forward requires a level of honesty the industry has so far resisted. Firms and legal technology vendors need to separate measurable, repeatable productivity gains from the warm glow of novelty. They need to publish net time savings — accounting for verification, correction, and oversight — rather than gross figures that ignore the human labor still required downstream. They need to address the billing model contradiction head-on rather than pocketing efficiency gains while raising rates. And they need to factor regulatory compliance costs into ROI calculations from the outset, not as an afterthought when the ethics opinion or the enforcement notice arrives.

For eDiscovery professionals, information governance specialists, and cybersecurity teams who increasingly intersect with legal workflows, the stakes extend beyond billable hours. An AI-driven review that sacrifices thoroughness for speed creates data governance risks. Privilege calls made by algorithms without adequate human oversight expose organizations to waiver arguments. Automated classification systems that have not been validated against jurisdiction-specific requirements generate compliance liabilities that may take years to surface. And AI-powered disposition workflows that lack defensible audit trails turn a records management tool into a spoliation time bomb.

The technology itself is not the problem. The problem is an industry that adopted AI with the enthusiasm of a convert but the rigor of a bystander — spending freely, measuring loosely, and deferring the hard question of whether any of it makes the practice of law better, cheaper, or more accessible for the people who ultimately pay for it.

If the legal industry cannot answer that question with data rather than anecdotes, then what it has built is not innovation. It is an expensive dependency dressed in a smarter interface — and the invoice, as always, lands on someone else’s desk.

What would it take for your organization to require net productivity metrics — verification costs included — before renewing a single AI contract?

Assisted by GAI and LLM Technologies


Source: ComplexDiscovery OÜ

ComplexDiscovery’s mission is to enable clarity for complex decisions by providing independent, data‑driven reporting, research, and commentary that make digital risk, legal technology, and regulatory change more legible for practitioners, policymakers, and business leaders.

 


ComplexDiscovery OÜ is an independent digital publication and research organization based in Tallinn, Estonia. ComplexDiscovery covers cybersecurity, data privacy, regulatory compliance, and eDiscovery, with reporting that connects legal and business technology developments—including high-growth startup trends—to international business, policy, and global security dynamics. Focusing on technology and risk issues shaped by cross-border regulation and geopolitical complexity, ComplexDiscovery delivers editorial coverage, original analysis, and curated briefings for a global audience of legal, compliance, security, and technology professionals. Learn more at ComplexDiscovery.com.

 

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Gemini, Grammarly, Midjourney, and Perplexity, to assist, augment, and accelerate the development and publication of both new and revised content in published posts and pages, a practice initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.