LexisNexis CEO Says AI Law Era Already Here as Protégé Drafts Court Documents

TL;DR: LexisNexis CEO Sean Fitzpatrick describes the legal research company’s evolution from database provider to AI-powered drafting platform with Protégé. The system uses multi-model agentic AI backed by 160 billion documents, employs attorneys to review outputs, and addresses accuracy concerns following multiple cases of lawyers sanctioned for submitting hallucinated case citations.

Sean Fitzpatrick, CEO of LexisNexis North America, UK, and Ireland, characterises his company as “an AI-powered provider of information, analytics, and drafting solutions for lawyers”—a significant departure from its traditional identity as the legal profession’s foundational research database.

In an interview with The Verge’s Nilay Patel for the Decoder podcast, Fitzpatrick detailed how LexisNexis has transformed from simple research infrastructure into a system that drafts actual legal documents lawyers submit to courts.

From Database to Drafting Platform

LexisNexis launched its Lexis+ AI platform in 2023 and followed with the Protégé assistant, marking a cultural shift from providing case law databases to actively generating legal work product.

Evolution Timeline:

  • Pre-2020: Traditional research database for case law and legal precedents
  • 2020: Launch of Lexis+ integrated ecosystem
  • 2023: Launch of Lexis+ AI with drafting capabilities
  • 2025: Protégé now described as fastest-growing product in company history

The transformation leverages falling token costs: from $20 per million tokens two years ago to 10 cents today, a roughly 200-fold reduction that enables speed and scale previously impossible.
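The economics are easy to check against the article's own round numbers. A minimal sketch (the per-million-token prices are the figures quoted above, not any vendor's current rate card, and the document counts are hypothetical):

```python
def cost_usd(tokens: int, price_per_million: float) -> float:
    """Cost of processing `tokens` tokens at a given per-million-token price."""
    return tokens / 1_000_000 * price_per_million

# A hypothetical research pass over 100 documents at ~5,000 tokens each.
tokens = 100 * 5_000

old_cost = cost_usd(tokens, 20.00)  # $20 per million tokens, two years ago
new_cost = cost_usd(tokens, 0.10)   # 10 cents per million tokens, today

print(f"then: ${old_cost:.2f}, now: ${new_cost:.2f}, "
      f"ratio: {old_cost / new_cost:.0f}x")
# → then: $10.00, now: $0.05, ratio: 200x
```

At that ratio, workloads that were uneconomic two years ago, such as running every draft through multiple model passes, become routine.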

Multi-Model Agentic Architecture

Protégé employs sophisticated model orchestration rather than relying on a single foundation model:

Technical Approach:

  • Planning agent receives and analyses queries
  • Tasks allocated to specialised models based on requirements
  • OpenAI o3 deployed for deep research tasks
  • Claude 3 Opus used for document drafting
  • Model-agnostic approach selecting best tool for each function

Fitzpatrick explained the system’s operational logic: “When someone puts in a query—let’s say they wanted to draft a document—the query will go into a planning agent, who will then allocate that query out to other agents. It needs to do some deep research, so maybe it uses OpenAI o3 because it’s really good at deep research. At the end, it needs to draft a document, so maybe it uses Claude 3 Opus, which is really good at drafting.”
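The routing pattern Fitzpatrick describes can be sketched in a few lines. This is an illustrative toy, not LexisNexis internals: the planner, the dispatch table, and the stand-in `call_model` function are all assumptions, with only the model pairings (o3 for research, Claude 3 Opus for drafting) taken from the quote above:

```python
# Registry mapping task types to the model assumed best suited to each,
# per the pairings Fitzpatrick describes in the interview.
MODEL_ROUTES: dict[str, str] = {
    "deep_research": "openai-o3",    # described as strong at deep research
    "drafting": "claude-3-opus",     # described as strong at drafting
}

def call_model(model: str, task: str) -> str:
    """Stand-in for a real model API call; returns a placeholder result."""
    return f"[{model}] completed: {task}"

def plan(query: str) -> list[tuple[str, str]]:
    """Toy planning agent: decompose a drafting request into sub-tasks."""
    return [
        ("deep_research", f"gather authorities for: {query}"),
        ("drafting", f"draft document for: {query}"),
    ]

def run(query: str) -> list[str]:
    """Route each planned sub-task to its designated model."""
    return [call_model(MODEL_ROUTES[task_type], task)
            for task_type, task in plan(query)]

for step in run("motion to dismiss"):
    print(step)
```

The design point is the indirection: because the planner addresses capabilities ("deep_research", "drafting") rather than vendors, swapping in a better model is a one-line registry change, which is what "model-agnostic" buys in practice.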

Addressing the Hallucination Crisis

The legal profession has experienced multiple high-profile failures from AI hallucination, with lawyers sanctioned for submitting briefs containing fabricated case citations and courts rescinding opinions containing hallucinated references.

LexisNexis Countermeasures:

  • 160 billion documents and records as grounding data
  • Citator agent verifies cases weren’t fabricated and remain good law
  • Links directly to case law within LexisNexis system
  • No fabricated cases in validated collection
  • Attorney review of outputs before deployment
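The citator check described above reduces to a lookup against a verified collection: any cited case that cannot be found is flagged as a possible hallucination, and any case found but no longer good law is flagged for caution. A minimal sketch, with a toy two-case collection and made-up status labels standing in for LexisNexis's actual citator data:

```python
# Toy verified collection: citation -> current treatment status.
# A real citator tracks far richer treatment history than this.
VERIFIED_CASES = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)": "good law",
    "Plessy v. Ferguson, 163 U.S. 537 (1896)": "overruled",
}

def check_citations(citations: list[str]) -> dict[str, str]:
    """Classify each citation as verified, bad law, or not found."""
    report = {}
    for cite in citations:
        status = VERIFIED_CASES.get(cite)
        if status is None:
            report[cite] = "NOT FOUND - possible hallucination"
        elif status != "good law":
            report[cite] = f"caution: {status}"
        else:
            report[cite] = "verified good law"
    return report

draft_citations = [
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Plessy v. Ferguson, 163 U.S. 537 (1896)",
    "Smith v. Imaginary Corp., 999 F.9th 1 (2099)",  # fabricated citation
]
for cite, verdict in check_citations(draft_citations).items():
    print(f"{cite}: {verdict}")
```

The guarantee this pattern offers is narrow but important: a model cannot smuggle a fabricated case past the check, because verification consults the grounded collection rather than the model's own output.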

Fitzpatrick acknowledged the severity: “I think somebody’s going to lose their license over this at some point. We’re seeing the sanctions start to ratchet up. A couple attorneys got fined $5,000 apiece, and then some attorneys in federal court down in Alabama got referred to the state bar association for disciplinary action.”

Attorney Review Infrastructure

LexisNexis has built substantial human oversight infrastructure—an investment Fitzpatrick characterised as unexpected:

“The biggest thing I’ve learned is how important it is to have attorneys doing that work. I expected to hire a lot of technical people and data scientists to do this work. I didn’t really expect to hire an army of attorneys.”

Review Process:

  • Attorneys with practice area expertise review outputs
  • Mergers and acquisitions documents reviewed by M&A specialists
  • Reviewers identify missing elements and quality issues
  • Feedback loop informs model training adjustments
  • Attorney-reviewed outputs positioned as “courtroom-grade”

The Apprenticeship Question

The interview highlighted concerns about AI disrupting the traditional apprenticeship structure of legal training. Junior associates historically learned by performing research and drafting tasks that AI now automates.

Fitzpatrick acknowledged the challenge: “If you start to take some of the layers out of the bottom, how does everyone skip the bottom layer and still make it to the second with the same capabilities and skills? That’s a real challenge.”

He noted firms are “struggling” with the apprenticeship question but expressed confidence the profession would adapt: “We tend to get some of the smartest and brightest people going into the legal profession, and so far, they seem to have figured out every challenge that’s faced the industry.”

AI and Originalist Interpretation

Patel challenged Fitzpatrick on the use of AI for originalist legal interpretation, a judicial philosophy requiring laws to be interpreted based on their meaning at enactment. Conservative judges have increasingly referenced “corpus linguistics” and AI analysis to determine historical word meanings.

Patel cited Trump-appointed Judge John Bush: “To do originalism, I must undertake the highly laborious and time-consuming process of sifting through all this. But what if AI were employed to do all the review of the hits and compile statistics on word meaning and usage? If the AI could be trusted, that would make the job much easier.”

When pressed on whether LexisNexis bears responsibility for how judges use its tools to reach controversial outcomes, Fitzpatrick compared the product to a “brick”—neutral infrastructure that could be used constructively or destructively.

Live Demonstration:

During the interview, Fitzpatrick tested Protégé on a politically charged question: “Does the 14th Amendment guarantee birthright citizenship or are there exceptions?”

The system returned a conventional legal answer recognising birthright citizenship with narrow exceptions (children of diplomats, enemy forces, etc.), noting “recent cases have affirmed this interpretation rejecting attempts to expand the exceptions of birthright citizenship.”

This response represents the current legal consensus, though the Trump administration is actively challenging birthright citizenship—likely through originalist arguments potentially supported by AI-powered historical linguistics analysis.

Neutrality Claims and Platform Responsibility

Fitzpatrick repeatedly positioned LexisNexis as a neutral tool provider following “responsible AI principles” including:

  • Transparency through “opening the black box”
  • Human oversight in product development
  • Privacy and security protections
  • Prevention of bias introduction

When pressed on whether the company would refuse to generate arguments for certain positions, Fitzpatrick stated: “I don’t know that we would want to necessarily restrict the AI in that way. We’re referring back to the information that we have, which is our authoritative collection of documents and materials that helps lawyers understand what the facts are, what the precedent is, and what the background is.”

This neutrality framing echoes technology industry arguments Patel characterised as historically inadequate: social media platforms claiming to be neutral whilst enabling substantial harms, AI companies dismissing copyright concerns, and other instances where “neutral tool” positioning preceded significant negative consequences.

Business Performance and Investment

Fitzpatrick confirmed Protégé is profitable despite substantial AI investment:

  • Described as “fastest growing product” in company history
  • Growth rate accelerated since AI launch
  • Customer satisfaction increasing
  • Profit growing (most investment directed at AI tools)
  • “Largest and most robust backlog of projects” in company history

The company operates within RELX, a publicly traded parent with four divisions. LexisNexis constitutes the Legal and Professional division.

Looking Forward

Fitzpatrick’s vision centres on personalised AI assistants for every attorney, understanding their practice area, jurisdiction, prior work product, preferences, and style whilst accessing LexisNexis’s 160 billion documents.

He estimated 10,000 tasks could potentially be automated across different attorney types and practice areas, with the company working through that backlog based on customer prioritisation.

However, he acknowledged uncertainty about the technology’s trajectory: “If I look back two years ago, I would’ve never guessed we’d be doing what we’re doing today because the technology didn’t exist or it was too expensive to implement. That’s totally changed over the last two years, and I think over the next two years, it’s going to change again.”

Critical Questions

Apprenticeship Crisis: If junior associates no longer perform foundational research and drafting work, how will the legal profession develop senior practitioners with necessary skills and judgement?

Human in the Loop: At what point does AI assistance become AI autonomy in legal practice, and what safeguards prevent over-reliance on automated reasoning?

Judicial AI Use: Should judges and clerks be subject to different standards than attorneys regarding AI use, particularly for writing opinions that establish legal precedent?

Originalism and AI: When judges use AI to determine historical word meanings for originalist interpretation, does this outsource legal reasoning to probabilistic systems in ways that undermine judicial accountability?

Neutral Tool Defence: Is characterising AI legal tools as “neutral infrastructure” adequate given the technology’s active role in constructing legal arguments and the legal profession’s gatekeeping function in the justice system?

Bias and Precedent: If AI systems are trained on existing case law reflecting historical biases, how does “preventing bias introduction” work when the grounding data itself may contain structural prejudices?

The transformation of LexisNexis from passive research database to active participant in legal reasoning represents a fundamental shift in how legal work is performed. Whether the profession’s traditional apprenticeship model, ethical standards, and quality control mechanisms can adapt to this automation whilst maintaining the rigour essential to just outcomes remains an open question.
