Editor’s Note: AI literacy has become the baseline expectation for cybersecurity, information governance, and eDiscovery work—and the organizations treating it as “nice to have” are already paying for that mistake in breach costs, courtroom sanctions, and regulatory exposure. What’s changing isn’t only the speed of AI adoption; it’s the accountability attached to it. From shadow AI quietly moving sensitive data into unapproved tools, to judges demanding verification of AI-assisted filings, to EU requirements that make workforce competence part of compliance, the common thread is simple: if you can’t explain, test, and defend what an AI system produced, you can’t responsibly rely on it.

This piece connects the skills gap to tangible operational risk and emerging legal duty, then lands on practical actions practitioners can take now—especially those “in the middle” who don’t set policy but still carry the consequences. The takeaway is both urgent and workable: inventory before policy, skepticism before automation, and role-calibrated training over one-time awareness modules. AI isn’t a passing tooling shift; it’s a governance reality. The question is whether your team can prove it understands the systems it’s already using.


Content Assessment: The AI Literacy Gap Is Now a Security and Compliance Liability

Information - 94%
Insight - 92%
Relevance - 92%
Objectivity - 93%
Authority - 91%

92%

Excellent

A short, percentage-based assessment of the qualitative benefit and positive reception of the recent article from ComplexDiscovery OÜ titled, "The AI Literacy Gap Is Now a Security and Compliance Liability."


Industry News – Artificial Intelligence Beat

The AI Literacy Gap Is Now a Security and Compliance Liability

ComplexDiscovery Staff

The vulnerability didn’t announce itself. It arrived quietly — in employees feeding confidential documents into unauthorized chatbots, in courtrooms demanding accountability for AI-generated legal submissions, and in security operations centers where analysts are now expected to interrogate the outputs of systems they didn’t build and may barely understand.

The numbers frame the problem starkly. Nearly half of IT decision-makers (48%) identify a lack of staff with sufficient AI expertise as the biggest barrier to adoption, even as 97% of organizations are either already using or planning to implement AI-enabled cybersecurity solutions. Organizations are racing to deploy the technology while simultaneously struggling to find people who understand how to govern it, secure it, or challenge it. That disconnect has real consequences — operationally, legally, and defensively.

The Skills Gap Has a Cost

The Fortinet 2025 Cybersecurity Global Skills Gap Report reveals that while 80% of organizations say AI is already helping their teams become more effective, nearly half identify a lack of staff expertise as the most significant barrier to secure implementation. Candidates with cybersecurity AI experience rank among the scarcest skill sets in the labor market — second only to network engineering and security expertise.

This isn’t simply a hiring problem. It is a structural vulnerability. When organizations deploy AI-powered threat detection, automated document review, or generative AI tools across departments without ensuring that the professionals overseeing them understand how those systems reason, fail, or hallucinate, the entire governance architecture becomes brittle. The 2025 ISC2 Cybersecurity Workforce Study — drawing on data from 16,029 practitioners surveyed in May and June 2025 — found that nearly nine in ten respondents had experienced at least one significant cybersecurity consequence because of a skills deficiency within their team or wider organization. Notably, this was also the first year ISC2 formally declined to publish a global workforce gap headcount estimate, deliberately shifting its measurement framework toward skills deficits rather than unfilled positions. This methodological decision says more about the nature of the problem than any headcount figure could. Consequences, in this context, are not an abstraction — they mean breaches, compliance failures, and incidents that could have been prevented.

One practical response for security and governance professionals at any level is to begin documenting AI tool usage within their teams, not to police it, but to understand it. Knowing what tools employees are reaching for — even informally — is the starting point for any meaningful AI literacy program. Inventory before policy is the sequence that actually works.
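Before any formal tooling or policy exists, that inventory can start as nothing more than a structured log. The sketch below is a minimal, hypothetical Python example of such a record; the tool names, field names, and entries are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIToolRecord:
    """One observed AI tool in use by a team (fields are illustrative)."""
    tool: str             # e.g., a public chatbot or an approved review platform
    team: str             # who is using it
    use_case: str         # what it is being used for
    data_categories: str  # kinds of data that touch the tool
    approved: bool        # whether the tool is sanctioned today
    notes: str = ""

# A starting inventory built from informal observation, not enforcement.
inventory = [
    AIToolRecord("public-llm-chatbot", "Legal Ops", "drafting summaries",
                 "internal, possibly client-confidential", approved=False,
                 notes="observed informally; candidate for policy discussion"),
    AIToolRecord("ediscovery-review-ai", "Litigation Support", "first-pass review",
                 "client ESI", approved=True),
]

# Persist as JSON so the inventory can grow before any platform is chosen.
print(json.dumps([asdict(r) for r in inventory], indent=2))
```

Even a log this simple surfaces the gap the article describes: the unapproved entries are exactly where a literacy conversation, rather than a disciplinary one, should begin.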

The Practitioner in the Middle

Much of the conversation about AI literacy concentrates on the organizational level — what CISOs should mandate, what governance leaders should build, what legal operations heads should require. That framing is necessary but incomplete. The professionals who will absorb the most immediate risk from the AI literacy gap are not the ones setting policy. They are the senior analysts, the experienced eDiscovery project managers, the mid-tenure records and information managers who are being evaluated today on their ability to work alongside AI systems that their organizations are still learning to govern.

For these practitioners, AI literacy does not require waiting for a formal training program. A useful starting point is developing what researchers describe as output skepticism — the habit of asking, for any AI-generated result, whether the system could plausibly have reached that conclusion incorrectly and, if so, what the downstream consequences would be. Effective AI literacy is not about mastering the tool; it is about knowing where the tool ends and your own judgment begins. Organizations, in turn, need to make it explicitly acceptable — and even professionally valued — for employees to pause and ask whether an AI output makes sense. For practitioners without the authority to redesign governance frameworks, building that habit of structured skepticism is a professional contribution they can make independently, starting now.

A 2025 peer-reviewed analysis published in the journal Business Horizons found that AI literacy must be multidimensional and role-sensitive — that without conceptual understanding, teams risk misuse; without ethical awareness, they may violate trust or compliance obligations; and without practical skills, even well-designed AI systems may fail to deliver impact. That role-sensitive framing matters for the practitioner in the middle. The level of AI literacy a project manager needs to responsibly oversee AI-assisted document review is different from what a CISO needs to evaluate an AI security platform — and conflating them produces training programs that satisfy neither audience. Professionals who can articulate that distinction within their organizations, and advocate for role-calibrated training rather than one-size-fits-all compliance modules, are already exercising the kind of informed judgment that AI literacy, at its core, is meant to produce.

Shadow AI: The Governance Time Bomb

Nowhere is the AI literacy gap more dangerous than in the context of shadow AI — the use of artificial intelligence tools by employees without organizational approval or oversight. The 2024 Microsoft and LinkedIn Work Trend Index, drawing on survey data from over 31,000 workers across 31 countries, found that 75% of knowledge workers already used AI at work, with 78% of those users bringing their own AI tools rather than relying on company-provided solutions. Given the pace of AI adoption since that research was published, current figures are almost certainly higher.

For information governance professionals, this represents a data management crisis in slow motion. According to the IBM Cost of a Data Breach Report 2025 — the firm’s 20th annual study, conducted by the Ponemon Institute across 600 organizations globally — organizations with high levels of shadow AI faced an average of $670,000 in additional breach costs compared to those with low or no shadow AI, with one in five organizations reporting a breach attributed to shadow AI. That liability isn’t hypothetical. It is already showing up on balance sheets.

A 2024 survey of over 12,000 white-collar employees, published in 2025 by KnowBe4 and conducted by Censuswide across six countries, revealed that 60.2% had used AI tools at work, but only 18.5% were aware of any official company policy regarding AI use. That gap — between adoption and awareness — is precisely where data leakage, privilege breaches, and regulatory exposure live. When an employee pastes client communications into a public large language model to draft a response faster, that employee is likely not attempting to violate data policy. They are simply trying to get their job done. The responsibility for closing that gap rests with governance leaders, not with the individual contributor.

The practical implication is that acceptable use policies alone are insufficient. Organizations must pair those policies with training that explains, in plain language, why the risk exists and how to recognize it. Security leaders must perform due diligence to educate employees on how to use AI tools safely, how AI uses their data, and which tools are safe for sharing company information — getting ahead of employee adoption is now the first step in preventing potential data breaches.

The eDiscovery Reckoning — and the Accountability It Demands

In the legal technology world, AI has moved from pilot programs to operational necessity faster than most practitioners anticipated. The 2025 Lighthouse AI in eDiscovery Report — based on survey responses from 225 legal professionals across corporate legal teams and law firms — found that compared to the prior year, legal professionals are moving beyond curiosity and initial experimentation, with a real increase in AI deployment across eDiscovery, contract review, and research. The report also reveals a growing divide between early adopters and those hesitant to embrace AI, suggesting that firms investing now may gain a competitive advantage in efficiency, cost savings, and decision-making. For eDiscovery professionals, that acceleration is not simply a technology story — it is a professional accountability story.

Document review is the primary driver of eDiscovery costs. Industry estimates consistently put document review at more than 80% of total eDiscovery spend — a figure commonly cited at $42 billion annually. When technology-assisted review is paired with generative AI summarization, reviewer hours can be substantially reduced. The efficiency gains are real. But efficiency is only half the equation, and it is the less contested half.

The more urgent question for eDiscovery professionals is who bears professional responsibility when AI-assisted review produces a privilege error, misses a responsive document, or generates a submission containing a fabricated citation. That question has already reached courtrooms and ethics bodies. In Mata v. Avianca, decided by the Southern District of New York in 2023, attorneys were sanctioned after submitting a brief containing judicial decisions fabricated by ChatGPT — a decision that has since become the defining precedent for practitioners’ responsibility for AI-generated legal work product. In a separate matter, United States v. Cohen, the Southern District of New York criticized an attorney for citing three cases that were hallucinated by Google Bard, reinforcing that the court — not the AI system — holds the attorney responsible for the accuracy of every submission.

Bar ethics bodies have responded accordingly. In July 2024, the ABA issued its first formal ethics guidance on lawyer use of AI tools — Formal Opinion 512 — applying existing Model Rules of Professional Conduct to the challenges of generative AI and making clear that the duty of competence under Rule 1.1 requires lawyers to understand the benefits and risks of the relevant technology. That opinion set a national floor, and states have been building above it. As of 2025, more than 30 states have released AI-specific guidance for attorneys. New York requires at least one CLE credit in cybersecurity, privacy, and data protection per biennial cycle — a category that now increasingly encompasses AI competency programming. In Pennsylvania, individual federal judges have issued standing orders requiring explicit disclosure of AI use in court submissions, and the Pennsylvania Bar Association’s Joint Formal Opinion 2024-200 establishes ethical standards for AI use statewide — representing a growing but not yet uniform disclosure mandate. Across jurisdictions, the consistent principle is that lawyers remain responsible for any incorrect information generated by an AI program and must verify citations and information produced by AI for accuracy.

For eDiscovery professionals and legal operations teams, the implication is direct and measurable: AI literacy in this context is not a general competency. It is a professional conduct obligation with sanctions attached. Understanding how a large language model handles privilege determinations, recognizing the conditions under which AI document classification produces systematic error, and being able to articulate a validation methodology to opposing counsel are no longer aspirational skills. They are the professional floor that ethics rules and case law have now established.

The Regulatory Signal Is Global

The United States is not alone in treating AI literacy as a legal mandate rather than an organizational preference. Under Article 4 of the EU AI Act — the world’s first comprehensive statutory framework for artificial intelligence — AI literacy obligations became enforceable on February 2, 2025, requiring all providers and deployers of AI systems operating in or serving EU markets to ensure their staff holds sufficient AI literacy to use those systems responsibly. The regulation applies based on where AI systems are deployed and whose data they touch, not where the deploying organization is incorporated — meaning that U.S.-based cybersecurity firms, law firms, and information governance teams with EU clients or EU data processing obligations are already inside its scope.

Article 4 carries no standalone direct fine, but failure to train staff is treated as a significant aggravating factor when national market surveillance authorities — whose enforcement authority over AI literacy activates in August 2026 — assess penalties for other violations. Separately, the Act’s penalty regime for prohibited AI practices under Article 5 became active on August 2, 2025, with fines reaching up to €35 million or 7% of global annual turnover — whichever is higher.

For information governance professionals in particular, the Act’s requirements — a complete AI inventory with risk classification, documented compliance roles distinguishing suppliers from deployers, and verified AI competence among all staff interacting with covered systems — read less like a foreign regulatory obligation and more like a formal codification of the governance framework that responsible organizations should already be building. The compliance window for high-risk AI systems closes on August 2, 2026. Organizations without an AI literacy foundation in place before that date will be attempting to meet a documented legal standard with an unprepared workforce.

The Trump Administration’s July 2025 AI Action Plan, “Winning the Race,” reinforces this direction domestically, calling for expanding AI literacy and skills development across the American workforce, with the Departments of Labor, Education, and Commerce and the National Science Foundation each directed to prioritize AI skill development as a core objective of their education and workforce funding streams. The plan also recommends that the U.S. Department of the Treasury issue guidance clarifying that AI literacy and skills development programs may qualify for eligible educational assistance as a tax-free working condition fringe benefit under Section 132 of the Internal Revenue Code. For organizations that have been looking for a financial justification to invest in AI training, that guidance removes one of the most common procurement objections.

On February 13, 2026, the Department of Labor issued its national AI Literacy Framework — Training and Employment Notice 07-25 — a formal directive to every state workforce agency, American Job Center, and community college in the country to begin delivering AI literacy training immediately, with federal workforce dollars now explicitly authorized for AI skills training through the WIOA funding mechanism. Taken together, the EU AI Act’s enforceable Article 4 obligations, the White House AI Action Plan, and the DOL’s national framework constitute a converging regulatory environment in which AI literacy has transitioned from voluntary best practice to binding expectation on both sides of the Atlantic. The policy window for treating it as optional has closed.

What Good AI Literacy Looks Like — And Where It Fails

This is the moment to say plainly what much of the discourse around AI literacy quietly avoids: most current enterprise AI training programs are not working. ISACA’s research identifies several common failure patterns — generalized, one-time AI training that fails to engage employees or address their specific needs; resistance from employees and leaders who see AI as disruptive to established workflows; cost and investment hesitancy from organizations unsure whether training investment will produce measurable business impact; and the persistent fear among employees that learning AI tools signals that their roles are replaceable, causing avoidance rather than engagement.

The organizations that have moved past these failure modes share a common characteristic: they treat AI literacy as a continuous, role-calibrated program rather than a compliance event. Industry research underscores the problem’s scope — 86% of business leaders say they want more training in responsible AI use, yet more than half report their organizations fall short in educating staff on AI ethics. That gap between stated intent and delivered training is where governance failures are born. An employee who completes a one-hour annual AI awareness module has not developed AI literacy. They have documented participation in a program that may create more organizational complacency than competency.

For professionals who have watched previous waves of enterprise technology promises — big data, blockchain, robotic process automation — arrive with declarations of transformation and depart with modest operational changes and unrealized governance frameworks, the AI literacy conversation can feel like a familiar loop. That skepticism is earned and should be acknowledged. The difference this time is that the consequences of the literacy gap are already quantified — in breach costs, in court sanctions, in bar ethics opinions, in federal workforce mandates, and now in EU statutory penalties — in ways that previous technology waves never produced this early in the adoption cycle. The risk is not theoretical. It has already been priced into litigation, regulation, and insurance.

What Professionals Can Do Now

AI literacy, in practice, does not require every cybersecurity analyst or records manager to become a data scientist. It requires a sufficient foundational understanding to work effectively alongside AI systems, to recognize when those systems are producing unreliable outputs, and to make governance decisions grounded in how the technology actually behaves—not how it is marketed.

IBM’s 2025 Cost of a Data Breach Report makes clear that AI adoption is outpacing both security and governance at most organizations — and that closing the AI literacy gap is no longer just a workforce development objective but a direct cost-control measure, with ungoverned shadow AI already adding hundreds of thousands of dollars to average breach costs.

For governance and compliance teams, building an AI use registry — a live catalogue of approved and tested tools with documented data-handling practices — turns an invisible risk into a managed one and directly satisfies the AI inventory requirements now mandated under the EU AI Act for organizations operating in that jurisdiction. For security operations professionals, developing fluency with how machine learning models generate and score alerts will improve the quality of human judgment applied to the cases that AI escalates. For eDiscovery practitioners, understanding the validation and quality control methodologies required for AI-assisted review is now as essential as understanding the chain of custody — and, under ABA Formal Opinion 512 and its state-level equivalents, it is now an ethics requirement as well. For the mid-level practitioner without organizational authority to mandate any of these changes, the most durable professional investment is documented, demonstrable AI output validation — the habit of showing, in writing, how an AI-assisted work product was checked, questioned, and verified before it reached a client, court, or regulator.
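One way to make that documented-validation habit concrete is a short, written verification record attached to each AI-assisted deliverable. The Python sketch below is purely illustrative — the class, field names, and checks are assumptions about what such a record might capture, not a standard or a prescribed workflow:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ValidationRecord:
    """A written record of how one AI-assisted work product was checked."""
    deliverable: str
    ai_tool: str
    reviewer: str = ""
    reviewed_on: str = ""
    checks: list = field(default_factory=list)  # (description, passed) pairs

    def log(self, check: str, passed: bool) -> None:
        """Record one verification step and whether it passed."""
        self.checks.append((check, passed))

    def defensible(self) -> bool:
        # A deliverable is defensible only if at least one check was
        # performed and every documented check passed.
        return bool(self.checks) and all(ok for _, ok in self.checks)

record = ValidationRecord("privilege log summary", "assisted-review-llm",
                          reviewer="PM", reviewed_on=str(date.today()))
record.log("all citations verified against primary sources", True)
record.log("privilege calls spot-checked by counsel", True)
print(record.defensible())
```

The design choice worth noting is that an empty record is not defensible: a deliverable with no documented checks fails by default, which mirrors the article’s point that undocumented verification is indistinguishable from no verification at all.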

The 2025 State of Data and AI Literacy Report found that 69% of organizational leaders now rank AI literacy as essential for daily workflows — a seven-percentage-point increase from the prior year — and that organizations cultivating both data and AI literacy simultaneously are better positioned to harness machine learning insights responsibly while minimizing bias and compliance risk. The two competencies reinforce each other, and building them together is the more durable investment.

The labor market is reflecting this shift in concrete terms. PwC’s 2025 Global AI Jobs Barometer — based on an analysis of close to one billion job advertisements from six continents — found that AI-skilled workers command an average 56% wage premium, a figure that more than doubled from 25% just one year earlier, suggesting the premium is accelerating rather than normalizing. Separately, LinkedIn data cited by the World Economic Forum shows a 70% year-over-year increase in U.S. roles requiring AI literacy — meaning demand is outpacing supply even as the number of available roles grows. For professionals in cybersecurity, information governance, and eDiscovery specifically, the IAPP’s 2025 Salary and Jobs Report added AI governance to its compensation benchmarking for the first time, reflecting formal recognition that AI governance expertise has become a distinct, compensable professional discipline rather than an extension of existing privacy or compliance roles.

The professionals who will define the next decade of cybersecurity, information governance, and eDiscovery are not necessarily the ones who understand AI most deeply from a technical standpoint. They are the ones who understand it well enough to govern it, challenge it, and explain to a court, a regulator, or a boardroom exactly why the AI said what it said — and why that answer should or should not be trusted.

As AI systems move from tools to decision-makers, and as the legal and regulatory environment tightens around how those decisions are documented and defended, the professionals who invested in AI literacy early will carry something their peers cannot quickly acquire: a demonstrated record of informed, accountable AI oversight at a moment when courts, clients, and regulators on both sides of the Atlantic have begun demanding exactly that.

Which raises the question every organization in this space should be sitting with right now: If your AI tools were audited tomorrow — their inputs, their outputs, their governance trail, and the people responsible for overseeing them — would your team be ready to defend every decision those systems made on your behalf?

Assisted by GAI and LLM Technologies


Source: ComplexDiscovery OÜ

ComplexDiscovery’s mission is to enable clarity for complex decisions by providing independent, data‑driven reporting, research, and commentary that make digital risk, legal technology, and regulatory change more legible for practitioners, policymakers, and business leaders.

 


ComplexDiscovery OÜ is an independent digital publication and research organization based in Tallinn, Estonia. ComplexDiscovery covers cybersecurity, data privacy, regulatory compliance, and eDiscovery, with reporting that connects legal and business technology developments—including high-growth startup trends—to international business, policy, and global security dynamics. Focusing on technology and risk issues shaped by cross-border regulation and geopolitical complexity, ComplexDiscovery delivers editorial coverage, original analysis, and curated briefings for a global audience of legal, compliance, security, and technology professionals. Learn more at ComplexDiscovery.com.

 

Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation processes and enhancing the overall quality of its research, writing, and editing efforts. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Gemini, Grammarly, Midjourney, and Perplexity, to assist, augment, and accelerate the development and publication of both new and revised content in its posts and pages, a practice initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.