Editor’s Note: A Colorado magistrate judge’s March 30 ruling in Morgan v. V2X, Inc. has handed the discovery bar a template for how protective orders must treat generative AI — and in doing so, exposed a harder question the bar has been circling for a year. When an autonomous agent plans, decides, and executes across the eDiscovery lifecycle without a human pressing “run” between steps, at what point does its existence trigger a duty to disclose under Federal Rule of Civil Procedure 26(f)?

For cybersecurity, data privacy, regulatory compliance, and eDiscovery professionals, this is not an academic question. Corporate legal operations leaders are already running agentic review pipelines. Incident-response teams are letting agents draft breach-notification work that will later become discoverable. Outside counsel are signing meet-and-confer reports without a working definition of where the human-in-the-loop stops.

This article traces the Morgan ruling into that gap, contrasts the efficiency promise of agentic eDiscovery with the sovereignty architecture those workflows now require, and sketches three questions every Rule 26(f) conference should answer before one side has to explain itself to a judge. Watch for vendor contract language, audit-trail reconstructability, and the Sedona Conference’s forthcoming drafting projects in the months ahead.




Industry News – eDiscovery Beat

When agents act: the Rule 26(f) disclosure threshold for agentic AI in eDiscovery

ComplexDiscovery Staff

A Colorado magistrate judge rewrote a protective order on March 30, forcing a pro se plaintiff to name the AI tool he had been using on confidential discovery materials. The ruling arrived quietly, but its implications will not.

Magistrate Judge Maritza Dominguez Braswell’s opinion in Morgan v. V2X, Inc., No. 25-cv-01991-SKC-MDB, 2026 WL 864223 (D. Colo. Mar. 30, 2026), did something neither of the two earlier federal AI-and-privilege rulings had attempted. It told litigants exactly what a protective order must say before confidential information touches a generative AI platform. And it arrived at the precise moment the discovery bar is wrestling with a harder question — what happens when the AI in question is not a tool the lawyer queries, but an autonomous agent that plans, decides, and executes across the eDiscovery lifecycle without a human pressing “run” between steps.

The Morgan opinion resolves a narrow dispute between an unrepresented employee and his former employer. But the reasoning reads as a map for the next fight. Judge Dominguez Braswell held that a pro se litigant’s AI-assisted materials are protected work product under Rule 26(b)(3), rejected the argument that routing information through an AI vendor automatically waives protection, and then amended the protective order to bar any party from uploading confidential information to an AI platform unless the provider is contractually barred from training on inputs, restricted from disclosing inputs to third parties except where essential for service delivery, and obligated to delete data on demand. Commentators at ACEDS and at Sergenian Law, both writing in early April 2026, have described Morgan as the most consequential AI-in-litigation opinion yet issued, and both conclude that consumer-tier platforms without enterprise contracts cannot safely touch material designated confidential under a protective order.
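
The amended order’s three conditions translate naturally into a pre-upload gate. A minimal sketch in Python, assuming a simplified contract summary; the VendorContract fields and the function are illustrative assumptions, not language from the opinion or from any vendor’s paper:

```python
from dataclasses import dataclass

@dataclass
class VendorContract:
    """Hypothetical summary of an AI provider's terms (illustrative only)."""
    bars_training_on_inputs: bool        # provider may not train models on uploads
    limits_third_party_disclosure: bool  # disclosure only where essential to the service
    deletes_on_demand: bool              # provider must delete data when instructed

def meets_morgan_conditions(contract: VendorContract) -> tuple[bool, list[str]]:
    """Gate uploads of confidential material on the three Morgan conditions."""
    failures = []
    if not contract.bars_training_on_inputs:
        failures.append("provider is not barred from training on inputs")
    if not contract.limits_third_party_disclosure:
        failures.append("inputs may reach third parties beyond service delivery")
    if not contract.deletes_on_demand:
        failures.append("provider is not obligated to delete data on demand")
    return (not failures, failures)

# Consumer-tier defaults typically fail all three conditions.
ok, reasons = meets_morgan_conditions(VendorContract(False, False, False))
if not ok:
    print("Do not upload confidential material:")
    for reason in reasons:
        print(" -", reason)
```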

That is the manageable version of the problem. The harder one is agentic.

Agentic eDiscovery tools — platforms marketed by vendors including Exterro and Relativity, along with a growing roster of entrants — deploy specialized agents that plan multi-step tasks: reconstructing timelines from unstructured data, flagging likely privileged material, running early case assessment triage, even drafting review memos. Exterro’s April 2026 vendor guidance, published through Lexology, frames this as “human-in-the-loop validation,” in which agents provide citations and reasoning that a human expert then confirms. A February 2026 vendor toolkit from Relativity, reviewed by eDiscovery Today, sorts agents into four categories of autonomy, from systems a human pilots step-by-step to systems that run end-to-end without interruption, and maps each category to the legal work it fits. The common thread across these vendor materials is a quiet concession: the defensibility of agentic eDiscovery depends on where, exactly, the human stops reviewing and the agent starts acting.
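
That four-tier idea is concrete enough to encode. The sketch below paraphrases it; the tier names and the task mapping are assumptions for illustration, not Relativity’s published labels:

```python
from enum import Enum

class AgentAutonomy(Enum):
    """Illustrative autonomy tiers; the names paraphrase the four-category
    idea and are not Relativity's published labels."""
    HUMAN_PILOTED = 1     # human directs every step
    HUMAN_GATED = 2       # agent proposes, human approves each action
    SUPERVISED = 3        # agent executes a plan, human reviews checkpoints
    END_TO_END = 4        # agent runs the workflow without interruption

# One team's possible mapping of tiers to review tasks (an assumption,
# not vendor guidance): riskier legal calls sit at lower autonomy.
TASK_FIT = {
    AgentAutonomy.HUMAN_PILOTED: ["privilege calls", "production sign-off"],
    AgentAutonomy.HUMAN_GATED: ["privilege triage", "review memo drafting"],
    AgentAutonomy.SUPERVISED: ["timeline reconstruction", "ECA triage"],
    AgentAutonomy.END_TO_END: ["deduplication", "ingestion and indexing"],
}

for tier, tasks in TASK_FIT.items():
    print(f"{tier.name}: {', '.join(tasks)}")
```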

Federal Rule of Civil Procedure 26(f) does not, on its face, require parties to disclose which AI tools sit inside their review workflow. The rule directs parties to meet and confer at least 21 days before the scheduling conference and to develop a written discovery plan addressing preservation, form of production, privilege, and any issues about electronically stored information. For two decades, that framework has absorbed every new review technology, from keyword search to predictive coding and the broader family of technology-assisted review, largely through Sedona Principle 6, the doctrine that the responding party is best situated to select its own methodology. Hyles v. City of New York and Livingston v. City of Chicago are the textbook applications. Courts have declined to compel use of TAR, and they have declined to compel disclosure of methodology beyond what cooperation and proportionality require.

Agentic AI stresses that framework in ways TAR did not. A predictive coding model suggests rankings; a human reviews and codes. An agent that performs privilege triage, generates reasoning, and queues production-ready recommendations is operating closer to the lawyer’s role than to the lawyer’s spreadsheet. When an agent chooses which documents to prioritize, which to flag, and which to set aside, the reviewing human inherits a narrowed universe shaped by a decision tree the opposing party cannot inspect. Maura R. Grossman, whose foundational research validated technology-assisted review, was a featured faculty member at the Sedona Conference’s Working Group 13 Annual Meeting held April 9-10 in Austin. Her presence underscores a growing focus, among the practitioners both the plaintiffs’ and defense bars watch for guidance, on how cooperation and methodology disclosure should adapt to the ‘black box’ of autonomous agency. The Sedona Conference itself has four active drafting projects on AI governance, ethics, and regulatory crosswalks, according to a March 2026 webinar preview of the Austin meeting.
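
What would make that decision tree inspectable is a reconstructable record of every agent action. A minimal sketch, assuming a hypothetical JSONL audit log whose schema is an invention for illustration: each entry captures what the agent decided and why, and a hash chain makes after-the-fact editing detectable.

```python
import json, hashlib
from datetime import datetime, timezone

def log_agent_decision(log_path, agent_id, doc_id, action, rationale, model_version):
    """Append one agent decision to a tamper-evident JSONL audit trail.
    Each entry hashes the previous line so the sequence can be verified later."""
    try:
        with open(log_path, "rb") as f:
            prev_hash = hashlib.sha256(f.readlines()[-1]).hexdigest()
    except (FileNotFoundError, IndexError):
        prev_hash = "genesis"  # first entry in a new log
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "document": doc_id,
        "action": action,            # e.g. "flag_privileged", "deprioritize"
        "rationale": rationale,      # the agent's stated reasoning, verbatim
        "model_version": model_version,
        "prev_hash": prev_hash,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_agent_decision("agent_audit.jsonl", "priv-triage-01", "DOC-004512",
                   "flag_privileged", "email between counsel and client re: draft SOW",
                   "model-2026-03")
```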

Under this emerging standard, the “meet and confer” is no longer just about file types and date ranges; it is about the agency of the software. Litigants are increasingly expected to disclose not only that they are using AI, but how those agents are contractually bound, specifically whether training on inputs is prohibited and whether deletion of data is mandatory. If the software is making decisions a human used to make, the argument runs, the disclosure threshold has been met, and the parameters of that agent’s “employment” in the case must be transparent to the court and opposing counsel.

The Rule 26(f) threshold question, then, is not whether agentic AI must be disclosed as a general matter. It is whether the agent’s decision-making so materially shapes what gets produced that its existence becomes a “subject on which discovery may be needed” under Rule 26(f)(3). Emily Fedeles Czebiniak, an attorney at eDiscovery service provider TCDI who attended the April 2026 Austin meeting, said in an April post-event commentary that the conference’s focus had shifted from whether firms could obtain advanced models to whether they could govern, validate, and show their work with them. Organizations, she wrote, are now evaluated by how well they can demonstrate that discipline in regulatory or judicial settings.

Showing the work is where Sovereign AI enters the conversation. Data sovereignty — the principle that specific data must remain under a specific jurisdiction’s legal authority, not just in a specific physical location — has moved from a European compliance concern to a U.S. litigation concern in recent months. McKinsey, which has a consulting practice built around sovereign AI advisory, estimated in March 2026 that 30 to 40 percent of global AI spending could be influenced by sovereignty requirements, representing a market of $500 billion to $600 billion by 2030. The EU AI Act’s high-risk system obligations are scheduled to become binding on August 2, 2026, with penalties under the Act reaching as high as 7 percent of global turnover for the most serious violations, although the European Commission’s Digital Omnibus package has proposed delaying certain Annex III obligations into late 2027. A prudent compliance posture is to treat the August date as binding. Inside U.S. litigation, the Morgan protective order requirements — no training use, no onward disclosure, deletion on demand — are practical sovereignty conditions dressed in discovery clothing. A consumer AI platform that routes prompts through mixed jurisdictional infrastructure cannot meet them.

The cross-border exposure compounds the problem. In any U.S. matter involving EU-resident custodian data, an agentic review pipeline that sends prompts to U.S.-hosted infrastructure collides with the CLOUD Act on one side and GDPR international-transfer restrictions on the other. The Sedona Conference’s Working Group 6 convened in Berlin in February 2026 on precisely these cross-border discovery and data-protection tensions. Sovereign AI architectures — EU-operated providers, confidential compute, customer-managed keys — exist in part to resolve those tensions, but they do not deploy themselves. Counsel still has to choose the architecture before the meet-and-confer, not after.
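
In pipeline terms, the architecture choice surfaces as a routing guard that runs before any prompt leaves the review platform. A minimal sketch; the regime tags, region labels, and policy table are illustrative assumptions, not a statement of what the GDPR or the CLOUD Act permits:

```python
# Residency guard for an agentic pipeline: refuse to send a custodian's
# data to an endpoint outside the regions their regime permits. Region
# labels, regime tags, and the policy table are illustrative assumptions.
ALLOWED_REGIONS = {
    "GDPR": {"eu-central", "eu-west"},                      # EU data stays on EU infrastructure
    "US": {"us-east", "us-west", "eu-central", "eu-west"},  # U.S. data may go either way
}

def route_allowed(custodian_regime: str, endpoint_region: str) -> bool:
    """Return True only if the residency policy permits this transfer;
    an unknown regime is treated as forbidding every endpoint."""
    return endpoint_region in ALLOWED_REGIONS.get(custodian_regime, set())

assert route_allowed("GDPR", "eu-central")   # permitted: EU data, EU endpoint
assert not route_allowed("GDPR", "us-east")  # blocked: would cross the transfer line
```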

The same logic reaches beyond litigation. When an organization responds to a ransomware incident or a credential-exposure breach in 2026, agentic AI is frequently the triage engine that parses credential dumps, reconstructs attacker timelines, and prepares notification drafts for privacy counsel to finalize. That processing creates its own agentic footprint, and when the breach produces civil litigation — consumer class actions, regulator inquiries, shareholder suits — the Rule 26(f) disclosure question lands on the incident-response record as surely as it lands on the review workflow. Cybersecurity and information-governance teams that have pushed autonomy into their response pipelines need to anticipate how that autonomy gets described at a future meet-and-confer.

Corporate legal operations and outside counsel should treat three questions as live at every Rule 26(f) conference going forward, and as live at any incident-response postmortem that may mature into litigation. First, does the agentic system in use produce a reconstructable audit trail that shows what each agent decided and why? Second, can the vendor contract satisfy the Morgan conditions on confidential material? Third, if opposing counsel asks which AI tools are in the workflow, is the answer ready, or will it be assembled under time pressure after an objection?
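
Those three questions reduce to a checklist a team can run, and re-run, before every conference. A hypothetical sketch; the criteria simply paraphrase the questions above and carry no authority of their own:

```python
def rule_26f_readiness(audit_trail_reconstructable: bool,
                       contracts_meet_morgan: bool,
                       tool_inventory_current: bool) -> list[str]:
    """Turn the three meet-and-confer questions into open action items.
    The criteria paraphrase the questions above; they are a working
    checklist, not a court-endorsed test."""
    gaps = []
    if not audit_trail_reconstructable:
        gaps.append("Reconstruct per-agent decision logs before the conference.")
    if not contracts_meet_morgan:
        gaps.append("Bring vendor terms up to the Morgan conditions.")
    if not tool_inventory_current:
        gaps.append("Compile the AI tool inventory now, not after an objection.")
    return gaps

for item in rule_26f_readiness(True, False, False):
    print("TODO:", item)
```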

The efficiency gains vendors describe are real. Exterro’s April 2026 vendor guidance claims review-timeline compression, with settle-or-litigate windows narrowing to the opening days of a matter rather than the opening weeks. The ethical floor is equally real. ABA Formal Opinion 512, issued July 29, 2024, confirmed that a lawyer’s duty of competence extends to AI and that outputs require independent verification proportionate to the stakes. NIST’s AI Risk Management Framework calls for human oversight at high-risk decision points, and the EU AI Act expressly requires it. The question is operational, not philosophical: can the lawyer defend the loop?
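
Operationally, “verification proportionate to the stakes” looks like a routing rule that escalates agent outputs to deeper human review as risk rises. A minimal sketch; the tiers and routing choices are assumptions, not anything Opinion 512, NIST, or the EU AI Act prescribes:

```python
def review_route(output_type: str, designation: str) -> str:
    """Escalate agent outputs to deeper human review as stakes rise.
    The tiers and routing rules are illustrative assumptions, not a
    standard drawn from Opinion 512, NIST, or the EU AI Act."""
    high_stakes = {"privilege_call", "production_recommendation"}
    if output_type in high_stakes or designation == "protective_order":
        return "attorney_verification_required"  # lawyer independently checks the output
    if output_type in {"timeline", "review_memo"}:
        return "sampled_spot_check"              # human validates a sample
    return "log_only"                            # recorded in the audit trail, no gate

print(review_route("privilege_call", "internal"))  # attorney_verification_required
print(review_route("timeline", "internal"))        # sampled_spot_check
```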

Judge Dominguez Braswell’s opinion hints at the answer. She required Morgan to disclose the tool not because the tool itself revealed strategy, but because V2X needed to know what had touched its confidential information to assess exposure. That logic — disclosure keyed to risk, not to principle — scales cleanly to agentic systems. If an agent has made decisions a reasonable opposing party would want to assess, the disclosure hook is already there in Rule 26(f)(3).

The Sedona Conference’s drafting projects on AI governance, ethics, and regulatory crosswalks were still in progress when the April Austin meeting adjourned. Sovereignty-aware infrastructure is still being built. The courts are moving faster than the consensus commentary. Practitioners who wait for the commentary to catch up will be advising clients on the back foot.

Where does a discovery team draw the line between an AI-assisted workflow that does not trigger a Rule 26(f) disclosure conversation and an agentic workflow that does?

Assisted by GAI and LLM Technologies


Source: ComplexDiscovery OÜ

