Editor’s Note: Day 2 of FutureLaw 2026 in Tallinn took the conference’s earlier framing and put a price tag on it. Uwais Iqbal’s keynote on hard truths from a production AI workbench used by tenancy-deposit-protection adjudicators reframed how legal AI should be measured — not by model benchmark scores but by whether users will choose to use the tool tomorrow. The Productizing Legal Services panel, anchored by Chas Rampenthal’s reading of Texas Opinion 705, named the billable hour as the variable now under pressure. Around them, panels on predictive analytics, legal design, hybrid adjudication, and transnational platforms filled in the operational architecture for what comes next.
For cybersecurity, data privacy, regulatory compliance, and eDiscovery professionals, the consequential threads here run through procurement, vendor risk, AI-output reviewability, and the structural pricing of legal services. Texas Opinion 705 has direct implications for how legal-service providers bill on AI-assisted matters. Iqbal’s hard truths map directly to procurement language around audit trails, observability and human-in-the-loop design. The closing day’s hybrid-justice and transnational-platform conversations sketch the next decade of cross-border practice.
ComplexDiscovery OÜ was on site in Tallinn, covering FutureLaw 2026 with practitioner-focused reporting and post-event analysis for cybersecurity, privacy, regulatory compliance, and eDiscovery professionals.
FutureLaw 2026 closes: hard truths, the billable hour, and what gets built next
ComplexDiscovery Staff
FutureLaw 2026 closed in Tallinn with the conversation getting harder. Where Day 1 framed who should govern AI and who should build with it, Day 2 pulled the discussion onto two fault lines the legal industry has not yet resolved: the billable hour and the gap between AI demos and AI in production. Uwais Iqbal stood up before lunch and presented hard truths from more than 20,000 AI-assisted legal decisions. Chas Rampenthal sat on an afternoon panel and argued the billable hour is breaking under the weight of generative AI’s productivity. Around those two sessions, Day 2 panels on predictive justice, legal design, hybrid adjudication, and transnational legal platforms filled in the operational picture.
Hard truths from 20,000 adjudications
Uwais Iqbal, founder of the London-based legal AI consultancy Simplexico, used his pre-lunch Day 2 keynote — “In the Loop: Hard Truths from 20,000 AI-Assisted Legal Decisions” — to walk the room through a production system that has now run for over 20 months. The deployment: an AI workbench for the adjudication team at one of the United Kingdom’s leading tenancy-deposit-protection schemes, processing about 15,000 disputes per year under the Housing Act 2004. The numbers Iqbal showed: 23,000 cases processed in production, 27 adjudicators using the platform, and a 28-day government-stipulated turnaround that the team was previously struggling to meet.
Iqbal’s five hard truths cut against several pieces of received wisdom in legal AI. First, user experience design beat model performance — moving to a frontier model did not move usage; introducing a specific drafting-and-redrafting workflow did. Second, the chat box is the wrong default for most legal work; structured product surfaces beat freeform prompting when stakes are high and decisions are repetitive. Third, 100 percent human review is a feature, not a bug. “Every single user said they’re reviewing AI output 100 percent of the time,” he said, contrasting that fidelity with the limited reviewability of agentic, multi-step workflows. Fourth, trust is a property of the user interface, not of the model — source highlighting, structured feedback, decision provenance and confidence scoring produce trust where model upgrades alone do not. Fifth, the success metric for legal AI is not accuracy or benchmark performance. “The success metric…is whether users would choose to use the tool tomorrow,” he said. In the survey, 77 percent of the adjudicators said they would not go back to working without it; 23 percent said they would.
The talk read as an antidote to the agentic-AI exuberance that has dominated several recent conferences. Iqbal’s argument lands on a sober procurement standard: build for the user’s daily workflow, expose the AI’s reasoning, accept that humans will need to review every output, and measure adoption rather than benchmark scores.
The billable hour cracks
If Iqbal’s keynote was Day 2’s hard-truths anchor, the post-lunch “Productizing Legal Services” panel was the billable-hour reckoning. Alexander Irschenberger, founder and CEO of the Danish legaltech company Legal Tekno, moderated a panel with Mariana Hagström (founder and CEO of the Estonian contract-management platform Avokaado), Chas Rampenthal (chief legal officer at Dinari and former general counsel of LegalZoom), and Nicolás Lozada Pimiento (CEO and founder of the Colombian arbitration-and-litigation platform Redek).
Rampenthal made the central argument. The Professional Ethics Committee for the State Bar of Texas, in Opinion 705 issued in February 2025, advised that lawyers using generative AI in hourly matters may charge for actual time spent using, refining, and checking AI outputs, but may not bill clients for time “saved” by the tool. The opinion is advisory, not binding on the Supreme Court of Texas, but it provides a practical ethics framework Texas lawyers should consider when using generative AI in hourly matters. “You can use AI in your practice, but if you save a lot of time on it, you can’t charge the customer for that saved time,” Rampenthal told the room. The consequence, he argued: “Wait a minute. If you can’t charge for the saved time, then how do you make money using AI? Well, you get rid of the [billable] hour.” Texas may not have intended this outcome, he said, but the effect will be that flat-fee pricing replaces hourly billing for AI-assisted work where the efficiency gains are material.
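Rampenthal’s arithmetic can be made concrete with a back-of-the-envelope sketch. The rate, hours, and fee below are illustrative assumptions, not figures from the panel or from Opinion 705; the point is only the structure of the incentive.

```python
# Illustrative figures only -- hypothetical rate, hours, and fee,
# not numbers from the panel or from Texas Opinion 705.
RATE = 400          # assumed hourly rate in USD
HOURS_MANUAL = 10   # assumed hours the matter takes without AI
HOURS_WITH_AI = 3   # actual hours spent using, refining, and checking AI output
FLAT_FEE = 3000     # assumed flat fee priced against the manual baseline

# Hourly billing under Opinion 705's guidance: bill actual time only,
# never the time the tool "saved".
hourly_manual = RATE * HOURS_MANUAL    # revenue on the matter without AI
hourly_with_ai = RATE * HOURS_WITH_AI  # only the 3 actual hours are billable

# Flat-fee billing: the price is attached to the output, so the
# 7-hour efficiency gain stays with the firm as margin.
print(f"hourly, no AI:   ${hourly_manual}")
print(f"hourly, with AI: ${hourly_with_ai}")
print(f"flat fee:        ${FLAT_FEE}")
```

Under these assumed numbers, adopting AI cuts hourly revenue on the matter from $4,000 to $1,200, while the flat fee holds at $3,000 — which is the mechanism behind Rampenthal’s “get rid of the [billable] hour” conclusion.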
Lozada Pimiento pushed the frame harder, calling the billable hour a regional problem rather than a structural one. “This thing about billable hours is more of a first-world problem,” he said. “In Colombia and in many other Latin American countries, we usually charge lump sums and that’s been the practice for decades.” Doctors do not bill by the hour, he noted. Neither do most other professional services. The cultural exception is the legal profession in the United States and parts of Europe.
Rampenthal then delivered the panel’s most quotable line on the profession’s pricing self-image. “A lot of lawyers, they market themselves as problem solvers,” he said. “Every great lawyer is also a problem finder, right? You think you have one problem. ‘Oh, Mr., you have seven and only I can help…’ I am the problem creator and the problem solver. That’s how our profession has been.” The model, he argued, cannot scale to reach beyond the top 8 percent of businesses and individuals without changing. Hagström added a structural observation: most lawyers do not know how to price anything except their own hour, because that is the only pricing they were taught. Pulling away from hourly billing, the panel agreed, requires productizing — taking the work lawyers do every day and doing it the same way every day, then pricing the result rather than the input. Rampenthal cited LegalZoom’s roughly $800 million projected 2026 revenue as evidence that productized legal services can scale.
Predictive justice meets the rule book
The pricing conversation did not arrive without context. Earlier in the Day 2 morning program, the “Litigation Analytics & Predictive Justice” panel had set the legal-data backdrop that makes the productivity question possible in the first place. Damien Riehl, co-host of the conference and solutions champion at Clio, told the room that downloading the United States federal judiciary’s published opinions and motions from PACER would cost approximately $2 billion. “To download all the judicial opinions and motions and briefs from Pacer would cost $2 billion with a B,” he said. He framed the access problem as a 21st-century version of pre-Gutenberg scarcity: the law functions as legitimacy infrastructure for everyone, but only the institutions that can pay for the raw material can build the predictive tools.
Dr. Benedikt M. Quarch, co-founder of the German legaltech company RightNow Group and co-director of the German Legal Tech Hub, brought the regulatory counterpoint. In Germany, he said, roughly 1 to 2 percent of judgments are published, and many courts have no shared internal system for sharing decisions across judges. The EU AI Act’s Annex III treats certain AI systems used by or on behalf of judicial authorities — those that assist in researching and interpreting facts and law, applying law to facts, or supporting similar ADR functions — as high-risk, he reminded the room. The relevant provisions were drafted before ChatGPT existed and have not kept pace. Maya Markovich, vice president at the AAA-ICDR Institute and executive director of the Justice Technology Association, framed the access stratification: enterprise customers of frontier-model providers get capabilities — Anthropic’s recent agent “dreaming” feature, for example — that consumer-tier users do not, and the result is a justice-system access gap that the open-data conversation cannot solve on its own.
Beyond the feature factory
If the morning panels framed the data, the afternoon framed the workflow. Day 2’s afternoon program opened with Kyle Gribben, head of digital services at Matheson LLP, moderating “Beyond the Feature Factory: Designing Legal Tech That Actually Works with Humans,” with Stefania Passera (founder and contract designer at Passera Design and co-founder of the Legal Design Alliance), Andrei Salajan (director of legal tech and innovation at Schoenherr Attorneys at Law), and Mia Ihamuotila (legal tech and design lawyer at Castrén & Snellman). Salajan made the structural argument: law firms’ year-end reset of billable hours to zero, he said, runs counter to the multi-year time investment that real prototyping culture requires. Ihamuotila reframed the deployment question as a thinking question. Lawyers, she said, excel at analytical and dogmatic thinking but need to practice systematic, hypothesis-driven and design thinking to use AI effectively. The panel converged on a posture: map workflows before optimizing them, treat lawyers as the prototype rather than the technology, and accept the discomfort that comes with operating without certainty.
Justice in a hybrid landscape
Behind the workflow conversation sat the day’s opening question — what kind of foundations does trust require? Day 2 had begun with Laura Kask, CEO at Proud Engineers and a former chief legal officer for the Estonian government’s CIO, delivering the keynote “Architecting Trust: Digital Sovereignty and Resilience in a Volatile World.” Kask walked the audience through Estonia’s data embassy in Luxembourg — the world’s first such installation, opened in 2018 — and Estonia’s no-legacy IT policy under which government systems are sunset after 13 years. “Even seven is too long with a rapid technological development,” she said. She closed with the lesson from Estonia’s 2017 ID-card cryptographic flaw: the law has to be flexible enough that the next crisis can be answered without paralysis.
Maya Markovich then moderated a Day 2 panel — “The New Age of Justice: AI-Powered Court and ADR Systems in a Hybrid Legal Landscape” — with Ruth Prigoda (judge at the Tallinn Administrative Court), Quarch (returning) and Lozada Pimiento (returning). Prigoda described an Estonian court system where digital files already run to 600 pages on a typical matter and AI-generated 100-page additions arrive from self-represented parties. “If I put there 100 more, then I have 600 pages to read and then 100 pages more,” she said. Lozada Pimiento cited a 2024 Colombian Constitutional Court ruling holding that the natural human judge cannot be replaced by AI, and referred to University of Chicago research on AI and legal problem-solving. The research the panel referenced is the work of Eric Posner (Kirkland & Ellis Distinguished Service Professor at the University of Chicago Law School) and Shivam Saran, published in 2025 and expanded in 2026 — “Judge AI: A Case Study of Large Language Models in Judicial Decision-Making.” Their reported finding: GPT-5 reached the correct legal outcome in 100 percent of the test cases studied, while United States federal judges reached the correct outcome in 52 percent of cases. Riehl later clarified from the floor that his earlier on-stage characterization of the figures was incorrect — the humans, he said, were “a coin flip,” not at 60 percent.
Law without borders
From hybrid justice the program turned to hybrid infrastructure. Knut-Magnar Aanestad, chief product officer at Saga, delivered the Day 2 keynote “Law Without Borders: The Rise of Transnational Legal Platforms.” He walked the audience through three architectural approaches to building a legal platform that works across jurisdictions: copy-paste (build deep for one market, then port), domain-bound (build for a practice area where the process travels), and process-anchored (build around the cognitive work of lawyering itself, with jurisdictional content layered on top). The third approach, he argued, is where AI changes the math. Common anatomy in how lawyering happens — fact-finding, evidence assessment, legal research — is more portable than the legal sources themselves, and once the language and content layers are abstracted, a single platform can support work in multiple jurisdictions without rebuilding the workflow each time.
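The process-anchored approach can be sketched as a minimal data model. Every name below is hypothetical — this is not Saga’s implementation, only an illustration of the separation Aanestad described: a shared lawyering workflow, with jurisdiction-specific sources as a pluggable content layer.

```python
# Hypothetical sketch of a process-anchored platform: the workflow steps
# are shared across jurisdictions; only the content layer is swapped in.
from dataclasses import dataclass, field

@dataclass
class JurisdictionContent:
    """Jurisdiction-specific layer: sources and language, nothing else."""
    code: str
    language: str
    statutes: list[str] = field(default_factory=list)

# The shared "anatomy of lawyering" -- identical in every market.
WORKFLOW = ["fact-finding", "evidence assessment", "legal research"]

def run_matter(content: JurisdictionContent) -> list[str]:
    """Run the shared workflow steps against one jurisdiction's content."""
    return [f"{step} [{content.code}/{content.language}]" for step in WORKFLOW]

# Two markets, one workflow -- no per-jurisdiction rebuild.
estonia = JurisdictionContent("EE", "et", ["Law of Obligations Act"])
germany = JurisdictionContent("DE", "de", ["BGB"])

for matter in (run_matter(estonia), run_matter(germany)):
    print(matter)
```

The design point is that `WORKFLOW` never changes per market; only the `JurisdictionContent` instance does, which is what makes the marginal cost of a new jurisdiction a content problem rather than an engineering problem.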
Dorte Carlsson, CEO of Copenhagen Legal Tech, then moderated a closing cross-border panel with Marcus M. Schmitt (general manager of the European Company Lawyers Association), Stefan C. Schicker (co-host; CEO at Inspiring Pioneers and chairman of Germany’s Legal Tech Verband) and Karol Valencia (AI adoption and change manager at Saga). Schmitt named the structural condition: Europe’s 35-plus countries and 50-plus languages produce a doubly fragmented legal-tech market that the United States does not face. Two of the top 25 AI companies globally, he noted, are pure-play legal: Harvey and Legora — either a leading indicator of the profession’s centrality to AI’s commercial future, or a bubble. Schicker pushed back on diversity-without-standardization, pointing to the SALI legal data standard as the working example of how to keep the diversity while solving the translation problem. The exchange closed with Schmitt’s directive — “stop being afraid of doing mistakes and talk about your mistakes” — and Carlsson’s quieter one: “we should disagree and we should say it aloud when we disagree.”

What the series leaves on the table
If the three-installment arc covering FutureLaw 2026 has a single thread, it is this. The morning of Day 1 argued who governs AI. Day 1’s midday and early-afternoon sessions argued who builds with it. Day 2 argued who pays for what gets built — and on what terms. Iqbal’s hard truths, the productizing panel’s billable-hour reckoning, the predictive-analytics access debate, the legal-design call for prototyping culture, the digital-sovereignty framing from Estonia, the hybrid-justice push from the Day 2 judicial panel, the transnational-platform architecture, and the cross-border collaboration call — each tackled a different facet of the same operational question. For ComplexDiscovery readers, the through-line is procedural: invest in foundations, expose AI’s reasoning, accept human review as the default, and treat the billable hour as the variable rather than the constant.
If your firm or department had to redesign one of these — pricing, infrastructure, training, or governance — first, which would it be?
News sources
- FutureLaw 2026 Speakers (futurelaw.ee)
- Uwais Iqbal — Simplexico Founder (futurelaw.ee)
- Damien Riehl — Co-Host, Clio (futurelaw.ee)
- Maya Markovich — AAA-ICDR / Justice Tech Association (futurelaw.ee)
- Dr. Benedikt M. Quarch — RightNow Group (futurelaw.ee)
- Chas Rampenthal — Chief Legal Officer at Dinari (futurelaw.ee)
- Knut-Magnar Aanestad — Chief Product Officer at Saga (futurelaw.ee)
- Texas Center for Legal Ethics — Opinion 705 (Texas Center for Legal Ethics)
- Judge AI: A Case Study of Large Language Models in Judicial Decision-Making (Eric Posner & Shivam Saran) (SAGE Journals)
- How Do AI ‘Judges’ Compare to Human Ones? (Eric Posner) (University of Chicago Law School)
Assisted by GAI and LLM Technologies
Additional Reading
- FutureLaw 2026 Day One, after lunch: from regulating AI to building with it
- FutureLaw 2026 opens in Tallinn with a sharp question: who governs the governors?
- FutureLaw 2026 Heads to Tallinn: Where Legal Innovation Meets One of Europe’s Most Captivating Capitals
- FutureLaw 2026 Preview: The Practical Path to Defensible AI in Legal Workflows
- The 2026 Event Horizon: Early Outlook for eDiscovery, AI, and European Innovation
- Data Provenance and Defense Tech: IG Lessons from Slush 2025
- Lessons from Slush 2025: How Harvey Is Scaling Domain-Specific AI for Legal and Beyond
- Kaja Kallas Warns of Democracy’s Algorithmic Drift at Tallinn Digital Summit
- The Agentic State: A Global Framework for Secure and Accountable AI-Powered Government
- When Founders Have Red Lines: Investing Beyond ROI at Latitude59
- At Latitude59, Estonia Challenges Europe: Innovate Boldly or Be Left Behind
Source: ComplexDiscovery OÜ

ComplexDiscovery’s mission is to enable clarity for complex decisions by providing independent, data‑driven reporting, research, and commentary that make digital risk, legal technology, and regulatory change more legible for practitioners, policymakers, and business leaders.
