Editor’s Note: Enterprise AI isn’t stalling because organizations lack tools—it’s stalling because too much of the value concentrates in a few power users while the enterprise remains stuck in pilots. Recent research draws a sharp line between “AI is available” and “AI is operational”: adoption is widespread, but enterprise-scale outcomes remain rare.

That imbalance matters most where risk and value collide. When capability lives with a few “one-eyed kings,” security and governance teams lose visibility into where sensitive data goes, how decisions are formed, and what evidence trails are being created. With new governance expectations taking hold and AI prompts now appearing in litigation, unmanaged AI usage is no longer just an efficiency issue—it’s an information governance and compliance exposure.

This article maps the move from individual advantage to collective capability: role-based training, governance that enables (not blocks), standardized safe prompts and tools, and workflow redesign that converts experimentation into enterprise-grade impact. For cybersecurity, privacy, regulatory compliance, and eDiscovery leaders, the takeaway is direct: you can’t defend—or produce—what you can’t see.


Content Assessment: From One-Eyed Kings to Collective Sight in Enterprise AI

Information - 92%
Insight - 91%
Relevance - 92%
Objectivity - 90%
Authority - 90%

Overall Score: 91% - Excellent

A short percentage-based assessment of the positive reception of the recent article from ComplexDiscovery OÜ titled "From One-Eyed Kings to Collective Sight in Enterprise AI."


Industry News – Artificial Intelligence Beat

From One-Eyed Kings to Collective Sight in Enterprise AI

ComplexDiscovery Staff

“In the land of the blind, the one-eyed man is king.” Erasmus wrote those words in 1500. As we enter 2026, they describe the state of AI adoption in enterprise organizations with uncomfortable precision.

The evidence accumulated throughout 2025 tells a consistent story. McKinsey’s State of AI survey, conducted mid-year among 1,993 professionals across 105 countries, found that 88% of organizations were using AI in at least one business function—a ten-percentage-point increase from 2024. Yet only 7% reported that AI had been fully scaled across their enterprises. Nearly two-thirds remained stuck in what McKinsey called “experiment or pilot” mode.

EY’s Work Reimagined Survey, which polled 15,000 employees and 1,500 employers across 29 countries in late 2025, added the human dimension to this picture. While 88% of employees used AI at work, their use was “mostly limited to basic applications, such as search and summarizing documents.” Only 5% were using AI in advanced ways that fundamentally transformed how they worked.

The blindness afflicting most B2B organizations was never a lack of AI tools. It was—and remains—a lack of organizational sight: the inability to see how AI could reshape workflows, the failure to invest in training that unlocks transformative use, and the governance gaps that leave individual “one-eyed kings” to build their own shadow kingdoms of productivity. As we navigate 2026, with new AI governance laws now in effect and the competitive pressure intensifying, organizations can no longer afford this blindness.

Stage One: Collective Blindness

The data from 2025 painted a consistent picture across research firms: widespread AI access had not translated into strategic capability. Microsoft’s Work Trend Index, drawing on survey data from 31,000 workers across 31 countries plus trillions of Microsoft 365 productivity signals, identified what it called the emergence of the “Frontier Firm”—organizations built around “intelligence on tap” and human-agent teams. But these firms remained the exception. Eighty-two percent of leaders said 2025 was a pivotal year to rethink core aspects of strategy and operations. Many did not act in time.

McKinsey’s analysis revealed the operational reality behind these numbers. Large enterprises with $5 billion or more in revenue were more likely to have crossed the scaling threshold, but the pattern held broadly: endless proofs of concept across different pockets of the business, each showing promise, rarely converging into shared platforms or redesigned workflows.

Three persistent blockers emerged from the research: fragmented data and legacy technology, workflows that were never redesigned for AI, and a lack of clear scaling priorities. As McKinsey’s researchers noted, “Most organizations are still navigating the transition from experimentation to scaled deployment, and while they may be capturing value in some parts of the organization, they’re not yet realizing enterprise-wide financial impact.”

This was—and for many organizations still is—the operational definition of organizational blindness: the technology is present, but the vision to deploy it strategically is absent. The organization cannot see what it has not learned to look for.

Stage Two: The Rise of the One-Eyed Kings

In this environment of collective blindness, individuals with even partial AI capability gained a disproportionate advantage. The EY survey quantified this phenomenon: employees who received more than 81 hours of annual AI training reported productivity gains averaging 14 hours per week—roughly 35% of a standard work week recaptured through AI-augmented work.

But here was the tension: only 12% of employees received AI training they considered sufficient to unlock these productivity benefits. The gap between AI availability and AI capability created the conditions for shadow adoption. According to EY, 37% of employees globally were bringing their own AI solutions to work, despite employer attempts to provide internal tools.

These “one-eyed kings”—employees who independently developed AI proficiency—weren’t rebels or policy violators by disposition. They were rational actors filling a capability vacuum. When organizations failed to provide adequate training (88% of employees lacked sufficient AI education) and internal tools lagged behind employee needs, shadow AI became an inevitable response to unmet productivity potential.

The productivity premium was real—and remains so today. EY’s research revealed that organizations with weak talent strategies—characterized by ineffective training, weak company culture, and misaligned rewards—saw AI productivity gains lag by over 40% compared to organizations that built what the research called “Talent Advantage.” But only 28% of organizations were on track to achieve this integrated approach to talent and technology.

The one-eyed king sees this gap clearly. While the organization debates AI strategy in committee, they’re using personal accounts to draft procurement RFPs, automate lead scoring, and synthesize customer feedback into actionable insights. Their partial sight—even limited to basic prompting skills—confers sovereignty in a kingdom of the blind.

The Security Void: Governing What You Cannot See

For cybersecurity and information governance professionals, the one-eyed king represents a specific category of risk: the productive employee who operates outside traditional IT safety nets. When more than a third of employees bring unsanctioned AI tools to work, the organization loses visibility into how its data is being processed, by whom, and through what systems.

Gartner’s Top 10 Strategic Technology Trends for 2026—now the framework guiding this year’s technology investments—frames this challenge in terms of two imperatives: preemptive cybersecurity and digital provenance. As organizations face an exponential rise in threats targeting networks, data, and connected systems, Gartner forecasts that by 2030, preemptive security solutions will account for half of all security spending. The shift from reactive defense to proactive protection is already underway.

Digital provenance—the ability to verify the origin, ownership, and integrity of software, data, media, and processes—has become critical. When employees feed sensitive B2B contract terms or proprietary specifications into public AI models, the organization loses the ability to verify the provenance of subsequent business decisions. The one-eyed king sees the productivity gain; the blind eye turns toward long-term security implications.

The response cannot be prohibition—the 2025 data showed that restriction simply drives adoption underground. Instead, security professionals should advocate for “Bring Your Own AI” policies focused on data hygiene: vetted prompt templates that redact sensitive variables before data leaves the internal network, approved tool lists that balance capability with security, and monitoring systems that provide visibility without stifling productivity.
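A vetted prompt template of this kind can be sketched in a few lines. The patterns and placeholder labels below are hypothetical illustrations, not a production data-loss-prevention rule set; a real deployment would align them with the organization's own data classification scheme.

```python
import re

# Hypothetical redaction patterns; real rules would come from the
# organization's data classification policy.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+(?:\.\w+)+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CONTRACT_ID": re.compile(r"\bCTR-\d{6}\b"),  # assumed internal ID format
}

def redact(text: str) -> str:
    """Replace sensitive values with labeled placeholders before a
    prompt leaves the internal network."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Summarize contract CTR-123456 for jane.doe@example.com."))
# → Summarize contract [CONTRACT_ID REDACTED] for [EMAIL REDACTED].
```

The point of the design is that the substitution happens at the template layer, before any external model sees the text, so monitoring can confirm what left the network without blocking the work itself.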

The eDiscovery Evolution: When Prompts Become Evidence

Perhaps nowhere is the tension between individual AI adoption and organizational governance more acute than in eDiscovery. The legal implications of shadow AI are no longer theoretical—they appeared in actual litigation throughout 2025, and the precedents being set will shape practice for years to come.

At Relativity Fest’s Annual eDiscovery State of the Union in late 2025, the emerging consensus was unambiguous. As the conference recap noted, prompts may now be discoverable—generative AI conversations have officially made their way into real document sets. The responsibility falls on the user: every conversation with AI should be treated as potentially discoverable data.

This raises complex questions about privilege that courts are now actively addressing. Early judicial suggestions indicate that prompting may qualify not just as fact work product but as opinion work product—the legal reasoning embedded in how questions are framed. But the case law remains nascent, and organizations cannot rely on privilege protections that courts have not yet firmly established.

The EU AI Act, which established one of the first formal regulatory frameworks for AI in 2025, adds an international dimension. Industry observers at the Relativity conference emphasized that the Act fundamentally changes the rules of accountability in AI development, creating new compliance requirements for the hundreds of generative AI tools now operating in the eDiscovery space alone.

For legal teams in 2026, the implication is immediate: if a B2B contract dispute arises, the “vision” used by a one-eyed king to draft or analyze that contract becomes a discoverable item. If the logic underlying a business decision was developed in a personal AI thread, the organization faces a significant hurdle in meeting its production obligations.

The solution is not to prohibit AI use but to mandate AI citation. Just as employees cite legal precedent or data sources, they should document the AI models and prompts used in business decisions. This creates the evidentiary breadcrumb trail that eDiscovery now requires—ensuring that the sight used today can be defended tomorrow.

Stage Three: Building Collective Sight

The goal for any B2B organization in 2026 is not to punish the one-eyed kings but to replicate their vision across the enterprise. When the organization sees clearly, individual advantage disappears—replaced by collective capability that compounds across every function.

McKinsey’s 2025 research identified what distinguished the roughly 6% of organizations that qualified as “AI high performers”—those attributing more than 5% of operating profits to AI and reporting significant value from their AI investments. The differentiators were organizational, not technological. High performers were 3.6 times more likely than others to say their organization intended to use AI to drive transformative change in their businesses, not just incremental efficiency gains. They thought bigger from the start.

Workflow redesign emerged as a critical separator. Fifty-five percent of high performers fundamentally reworked processes when deploying AI—almost three times the rate of other firms. They didn’t layer AI on existing workflows; they redesigned workflows around AI capabilities. This willingness to rearchitect, rather than simply augment, explained much of the performance gap.

High performers also set more ambitious objectives. While 80% of all respondents cited efficiency as an AI goal, high performers were significantly more likely to also target growth and innovation. Organizations that set broader objectives reported achieving a wider range of enterprise-level benefits—improved customer satisfaction, competitive differentiation, and revenue growth alongside cost savings.

Investment magnitude mattered as well. One-third of high performers allocated more than 20% of their digital budget to AI, compared to just 7% of other organizations. The correlation between investment scale and results was direct; half-measures produced half-results.

Microsoft’s “Frontier Firm” concept aligns with these findings. The research suggested that within two to five years, this model would become the competitive baseline. For organizations that delayed through 2025, the runway is now shorter.

EY’s framework for achieving “Talent Advantage” reinforces these findings—excelling across AI adoption, learning, talent health, organizational culture, and reward structures. Organizations that master all five unlock transformational value; those that neglect any dimension see their AI investments underperform dramatically. The question for 2026 is whether the majority can close this gap.

From Blindness to Sight: The Path Forward

The 2025 research points to specific interventions that bridge the gap between individual AI proficiency and organizational capability. With new governance frameworks now in effect, these steps have become urgent.

The first imperative is to surface the shadow kingdom rather than suppress it. Organizations should create structured opportunities—call them “AI Office Hours” or capability showcases—where power users can demonstrate their workflows in a safe environment. The small percentage of employees using AI in advanced ways have developed knowledge that the organization needs. Rather than treating them as policy risks, study their vision and replicate it across the collective. Practitioners at Relativity Fest emphasized that what distinguishes leading firms in the AI era is not the technology itself but the people—the human judgment that guides deployment. The one-eyed kings aren’t the problem; they’re the proof of concept.

Closing the training gap follows naturally. The productivity gains documented by EY represent enormous untapped value, but the vast majority of employees still lack sufficient training. Organizations should establish role-based capability training courses—what McKinsey identified as a key practice among high performers—to ensure employees at each level know how to use AI capabilities appropriately. The investment pays for itself: well-trained employees are less likely to resort to shadow tools that create governance risks.

Consistency requires infrastructure. When different managers use different AI models to analyze the same market trend, the organization ends up with conflicting strategies and no way to audit how conclusions were reached. A centralized prompt library creates a single source of truth for how the organization interacts with AI. It also addresses the security concern: vetted templates can be designed to redact sensitive variables before data leaves internal systems, transforming governance from a barrier into an enabler.
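A minimal sketch of such a library, under assumed task names and template wording, is simply a versioned lookup of approved templates with explicit variables—so two managers analyzing the same trend start from the same framing.

```python
# Hypothetical centralized prompt library: versioned, approved templates
# keyed by task, so every team frames the same question the same way.
PROMPT_LIBRARY = {
    ("market-trend-analysis", "v2"): (
        "Summarize the key drivers behind {trend} in the {region} market. "
        "Cite only the supplied documents; do not speculate."
    ),
}

def get_prompt(task: str, version: str, **variables: str) -> str:
    """Fetch an approved template and fill in its variables."""
    template = PROMPT_LIBRARY[(task, version)]
    return template.format(**variables)

print(get_prompt("market-trend-analysis", "v2",
                 trend="AI adoption", region="EMEA"))
```

Because the task and version travel with every request, the organization can later audit exactly which framing produced a given analysis—the "single source of truth" the paragraph above describes.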

Documentation must become standard practice. Given the discoverability concerns and new state AI laws now in effect, organizations should require employees to record the AI models and prompts used in business decisions. This isn’t bureaucracy—it’s risk management. Just as employees cite legal precedent or data sources, they should document their interactions with AI. The evidentiary trail protects both the employee and the organization when decisions face legal or regulatory scrutiny.
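One lightweight way to operationalize this requirement is a structured citation record captured alongside each AI-assisted decision. The field names below are assumptions for illustration, not a prescribed schema.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AICitation:
    """Hypothetical record documenting the AI interaction behind a
    business decision, retained for audit and eDiscovery readiness."""
    model: str         # model identifier, e.g. a vendor model name
    prompt_id: str     # reference to the vetted template that was used
    decision_ref: str  # the business record the AI output informed
    user: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def cite(model: str, prompt_id: str, decision_ref: str, user: str) -> str:
    """Serialize a citation entry for an append-only audit log."""
    return json.dumps(asdict(AICitation(model, prompt_id, decision_ref, user)))

print(cite("model-x", "TPL-014", "RFP-2026-031", "j.smith"))
```

Appended to a write-once log, entries like this form the evidentiary breadcrumb trail the article describes: who asked what of which model, in service of which decision, and when.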

But the most important intervention is also the most difficult: workflow redesign. The clearest finding from McKinsey’s research was that workflow redesign has the biggest effect on an organization’s ability to see profit impact from AI. Organizations that simply add AI to existing processes capture a fraction of the potential value. High performers rearchitect workflows, decision points, and task ownership to align with AI capabilities. This requires breaking work down into component tasks, determining which are best performed by AI versus humans, and reconstructing processes accordingly. It is slow, difficult, and disruptive. It is also what transforms one-eyed kings into an organization with collective sight.

The Cost of Remaining Blind

The 88-7 gap documented in 2025—88% adoption, 7% scaled deployment—represented both the opportunity and the risk facing B2B organizations. The one-eyed kings proved that AI could transform individual productivity. The question for 2026 is whether their vision will remain siloed in shadow kingdoms or become the foundation for enterprise-wide capability.

The window for action has narrowed. The frontier‑model accountability timeline has begun. New state AI governance laws are now in effect. The EU AI Act is already reshaping global compliance expectations ahead of its full applicability in 2026. AI prompts and outputs are entering the evidentiary record in active litigation. The conditions that once let organizations stay blind while one‑eyed kings built shadow kingdoms have fundamentally changed.

The professionals who manage enterprise data—security leads, governance officers, eDiscovery experts—must be the ones to lead the organization toward the light. They see the risks that the one-eyed kings cannot: the discoverable prompts, the provenance gaps, the regulatory obligations now in force. But they must also recognize that prohibition is not a strategy. Shadow AI existed because organizations failed to provide legitimate pathways to AI capability.

The choice is now immediate: invest in collective sight—through training, governance, workflow redesign, and cultural transformation—or watch your most productive employees build private kingdoms that the organization can neither see nor defend. In 2026, blindness is no longer just a competitive disadvantage. It’s a compliance risk. The 2025 data showed the path forward. The question is whether your organization has the vision to take it before the window closes.

Assisted by GAI and LLM Technologies


Source: ComplexDiscovery OÜ

ComplexDiscovery’s mission is to enable clarity for complex decisions by providing independent, data‑driven reporting, research, and commentary that make digital risk, legal technology, and regulatory change more legible for practitioners, policymakers, and business leaders.



ComplexDiscovery OÜ is an independent digital publication and research organization based in Tallinn, Estonia. ComplexDiscovery covers cybersecurity, data privacy, regulatory compliance, and eDiscovery, with reporting that connects legal and business technology developments—including high-growth startup trends—to international business, policy, and global security dynamics. Focusing on technology and risk issues shaped by cross-border regulation and geopolitical complexity, ComplexDiscovery delivers editorial coverage, original analysis, and curated briefings for a global audience of legal, compliance, security, and technology professionals. Learn more at ComplexDiscovery.com.


Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Gemini, Grammarly, Midjourney, and Perplexity, to assist, augment, and accelerate the development and publication of both new and revised content in posts and pages published (initiated in late 2022).

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.