Editor’s Note: Legal innovation conferences traditionally reward novelty. FutureLaw 2026’s first day in Tallinn rewarded restraint. Across the opening remarks, a General Court keynote, a panel on regulating the regulators, and a closing argument from a former LegalZoom general counsel, the program pulled the AI conversation back from product demos and into the harder territory of constitutional law, election security, and the structural fragmentation of EU enforcement.

For cybersecurity, data privacy, regulatory compliance, and eDiscovery professionals, the through-line matters. Election interference, AI-driven evidence manipulation, and inter-regulator information sharing now sit at the same table as professional ethics, supervision, and access to justice. The panel made a concrete case for a short piece of EU law that would unlock data exchange among data protection, competition, financial, and AI regulators — a structural reform that would reshape how regulated entities respond to investigations.

Watch what the Tallinn audience does with this framing tomorrow. The hosts asked whether the profession will adopt an exoskeleton of technology or remain in the document shop. Day Two will test which posture the room is ready to carry into its next workflow.

ComplexDiscovery OÜ is on site in Tallinn, covering FutureLaw 2026 with practitioner-focused reporting and post-event analysis for cybersecurity, privacy, regulatory compliance, and eDiscovery professionals.




Industry News – Artificial Intelligence Beat

FutureLaw 2026 opens in Tallinn with a sharp question: who governs the governors?

ComplexDiscovery Staff

The opening session of FutureLaw 2026 did not start with a vendor pitch. It started with a warning. “We are moving beyond AI hype towards verifiable infrastructure,” Valentin Feklistov, the conference’s founder and CEO, told the Main Stage audience at the Port of Tallinn Cruise Terminal on May 14. The line set the tone for a Day One morning program that, across two keynotes and a regulator-heavy panel, kept returning to the same question — what happens to legal authority when the tools of legal work begin to draft, interpret, and operationalize legal obligations?

Feklistov leaned into the metaphor that has followed generative AI since 2022. He compared the technology to Mike Ross from the television series “Suits” — knowledgeable, fast, and in constant need of supervision. “We should purposely leave some friction in our legal workforce that would actually stir cognitive tension enough to keep our mind sharp and us independent of the machine,” he said. The framing — AI as multiplier, not substitute — landed as the first applause line of the morning.



From hype to infrastructure

Co-hosts Stefan C. Schicker and Damien Riehl picked up the same thread but from opposite ends of the Atlantic. Schicker — CEO of Inspiring Pioneers, partner at SKW Schwarz and chairman of Germany’s Legal Tech Verband — told the room he had spent over 25 years helping firms adapt to successive technology waves, and that law is shifting “from documents to architecture.” “For centuries, law has been a document, a judgment, a statute on the shelf that frankly nobody opened,” he said. “Sometimes a decision is already made before the lawyer even sees the document.”

Riehl, a solutions champion at Clio whose career has spanned legal practice and software development, traced the inflection points more concretely. “The world shifted in November of 2022 and then the world shifted again this past December of 2025,” he said, pointing to the move from chatbots to working agents. He cited an access-to-justice statistic that would recur through the day: 92 percent of legal needs in the United States go unmet, he said, because lawyers are too expensive.


A keynote from the bench

Pēteris Zilgalvis, a judge at the General Court of the European Union, opened the formal program with the kind of measured warning courts rarely deliver from a tech-conference stage. “Either we will manage artificial intelligence or artificial intelligence will manage us,” he said. “We cannot crawl under a rock.”

Zilgalvis cited the General Court’s Grand Chamber judgment of Oct. 2, 2024 in Ordre néerlandais des avocats du barreau de Bruxelles and Others v Council of the European Union (Joined Cases T-797/22, T-798/22 and T-828/22), a case on which he sat. The judgment dismissed challenges to the EU sanctions ban on legal advisory services to Russian-government entities while affirming the lawyer’s role as a guardian of client interests in a state governed by the rule of law — a role, he said, that does not change with the arrival of new tools. He warned against what he called “reverse adaptation,” the slow drift in which lawyers begin writing and thinking in ways the machine can interpret rather than in the ways humans actually reason.

He then walked the audience through the Court of Justice’s own internal AI work as he described it. Zilgalvis said the court adopted an AI strategy in 2023 and an AI ethics charter in January 2026, and that it is migrating court data into a sovereign cloud. The charter, he said, codifies principles including fairness, impartiality, non-discrimination, transparency, traceability, system integrity, confidentiality, human agency, oversight, accountability, and societal and environmental responsibility. “We should be trying to utilize the tools that do not exacerbate climate change,” he said.



Regulating the regulators

Zilgalvis stayed onstage to moderate the morning’s headline panel — “Regulating the Regulators: Lawmaking in a Tech-Driven World” — joined by Astrid Asi, Estonia’s prosecutor general and a former president of the Harju County Court; Kilvar Kessler, the former chairman of Finantsinspektsioon (the Estonian Financial Supervision Authority) and a former member of the European Central Bank’s supervisory board; and Paul Nemitz, a retired principal advisor at the European Commission whom the FutureLaw program lists as the “Godfather of the GDPR.”

Asi delivered the bluntest assessment of where the law sits today. Estonia’s election-fraud provisions, she said, were written 20 or 30 years ago and were never built for AI-driven election interference of the sort observed in Romania, Moldova and Norway. “We don’t have the law sufficient enough how to react in this kind of situation,” she said. The free-speech line, she added, complicates everything. Coordinated disinformation is not automatically illegal. “We have to have these discussions where the line goes, where is the danger enough that the democracy is under threat already.” Prosecutors, she warned, arrive after the damage is done. Election interference, she said, requires supervision in real time, not after the count.

Kessler took the question of regulator capability head on, and answered it with hard numbers. The European Central Bank’s Single Supervisory Mechanism, he said, began a technology transformation in 2020 and completed it in 2024, deploying AI tools including Heimdall for fit-and-proper evaluations of bank managers and Athena for document analysis. He named Latvijas Banka, the Latvian central bank, as the most advanced supervisor in the Baltics on AI deployment. Asked about strategic autonomy, Kessler endorsed the digital euro as a contributor to “European digital independence.”

Nemitz played the skeptic the panel needed. The author of a forthcoming book, “The Open Future and Its Enemies,” he questioned whether AI was making courts more productive at all. Submissions are getting longer, not shorter. Zilgalvis interjected from the moderator’s chair that French and Latvian colleagues had reported receiving 600-page pleadings for simple matters, “very obviously prepared by AI.” Nemitz framed the deeper problem as a structural one. EU enforcement, he said, is fragmented across data protection, competition, financial, insurance and equal-opportunity regulators, each bound by confidentiality clauses that prevent information sharing. “In this world of omnipurpose technology, we need a very short piece of EU law” allowing those regulators to exchange relevant intelligence, he said. Without it, the platform economy and AI vendors operate horizontally while regulators remain stuck in vertical silos.

Riehl, returning to the stage at the panel’s close, summarized three takeaways the conference’s session app would push to attendees: set AI submission limits, stress-test election safeguards, and demand explainable regulatory technology.



Law without lawyers, or law with fewer barriers?

The morning closed with a keynote that took the conversation directly to consumers. Chas Rampenthal, chief legal officer at Dinari and former general counsel of LegalZoom, framed his talk around two scenarios — “law without lawyers” and “law with lawyers” — and argued the dichotomy is a false one.

Rampenthal opened with a recent United States case, U.S. v. Heppner, in which a court held that a client’s exchanges with a public generative AI tool were not privileged because the system was a third party, not a lawyer or a lawyer’s agent. The ruling, he said, was “technically correct” but a “mismatch between old doctrine and new behavior.” The case remains a developing reference point for AI, privilege, and work-product analysis as courts and commentators work out its reach. Regulators, Rampenthal argued, are applying rules built for an earlier era — fax machines and document automation — to tools that change every few weeks.

His central thesis: the existing professional rules — competence, candor, confidentiality, supervision, conflicts and advertising — are already enough. “AI use is not unregulated. It is and it has been long before AI existed,” he said. The fabricated case law that has drawn sanctions in U.S. courts, he reminded the room, did not violate a new rule. “The machine didn’t violate a rule, the lawyer did.”

Rampenthal saved his sharpest argument for the unauthorized-practice-of-law debate. The comparison in many consumer scenarios, he said, is not between a lawyer and an AI tool. It is between an AI tool and nothing. “UPL has to be used as a shield to prevent consumer harm and not as a sword to decapitate competition.” If a tool reliably helps a person triage an issue, organize facts and identify when a lawyer is needed, banning it on principle ignores the access gap. He left the audience with four directives: enforce existing ethical duties, allow outcome-focused experimentation, resist overly specific prescriptive rules, and let consumers use AI responsibly.



What Day One left on the table

Schicker, closing out the session, picked up Rampenthal’s image of lawyers wearing “an exoskeleton of technology” and held it out as the working metaphor for the conference. The two days ahead, he said, would test whether the room could carry that posture into the workflows that follow.

If Day One framed a thesis, it was this: the harder problem at FutureLaw 2026 is not the technology in lawyers’ hands. It is the technology in regulators’ hands — and the laws still missing from the regulators’ shelves.

What should a profession do first when the law trails the tools by a decade?

Assisted by GAI and LLM Technologies


Source: ComplexDiscovery OÜ

ComplexDiscovery’s mission is to enable clarity for complex decisions by providing independent, data‑driven reporting, research, and commentary that make digital risk, legal technology, and regulatory change more legible for practitioners, policymakers, and business leaders.



ComplexDiscovery OÜ is an independent digital publication and research organization based in Tallinn, Estonia. ComplexDiscovery covers cybersecurity, data privacy, regulatory compliance, and eDiscovery, with reporting that connects legal and business technology developments—including high-growth startup trends—to international business, policy, and global security dynamics. Focusing on technology and risk issues shaped by cross-border regulation and geopolitical complexity, ComplexDiscovery delivers editorial coverage, original analysis, and curated briefings for a global audience of legal, compliance, security, and technology professionals. Learn more at ComplexDiscovery.com.


Generative Artificial Intelligence and Large Language Model Use

ComplexDiscovery OÜ recognizes the value of GAI and LLM tools in streamlining content creation and enhancing the quality of its research, writing, and editing. To this end, ComplexDiscovery OÜ regularly employs GAI tools, including ChatGPT, Claude, Gemini, Grammarly, Midjourney, and Perplexity, to assist, augment, and accelerate the development and publication of new and revised content in its posts and pages, a practice initiated in late 2022.

ComplexDiscovery also provides a ChatGPT-powered AI article assistant for its users. This feature leverages LLM capabilities to generate relevant and valuable insights related to specific page and post content published on ComplexDiscovery.com. By offering this AI-driven service, ComplexDiscovery OÜ aims to create a more interactive and engaging experience for its users, while highlighting the importance of responsible and ethical use of GAI and LLM technologies.