Why Smart Lawyers Are Building AI Tools Instead of Buying Them
Thursday, January 29, 2026

The legal profession is often presented with a false choice when it comes to artificial intelligence: invest in expensive, enterprise-grade legal technology or avoid AI altogether. That binary framing misses a growing middle ground—one that allows lawyers to experiment with AI in practical, low-risk ways while developing real technological competence.

That middle ground is what AI researcher Andrej Karpathy has coined “vibe coding.”1 In simple terms, vibe coding is a conversational, goal-oriented approach to building software. Instead of writing code line by line, the user (read: you or me) acts as a director, describing what they want a tool to do and refining it through natural-language feedback to an AI system.

For lawyers and legal educators, vibe coding represents a meaningful shift in the way we engage with AI. The technical barrier that once separated “users” from “builders” has largely disappeared. Today, functional, custom AI tools can be created in platforms like ChatGPT, Claude, or Gemini in under an hour, often with no coding experience at all.2

That type of accessibility raises a practical question for law firms and legal organizations: when does it make sense to purchase a vendor-supplied legal technology product, and when is it more effective to build a custom, vibe-coded solution? Understanding the buy-versus-build distinction is key to using AI strategically rather than reflexively.

I. The Case for the "Agile Sandbox"

The argument for building custom tools begins with an accurate understanding of the capabilities and limitations of the underlying large language models (LLMs). Research increasingly shows that LLMs perform at or above human levels on discrete, well-defined legal tasks.3 In early 2024, researchers found that LLMs could review invoices for compliance with billing guidelines with 92% accuracy, exceeding the 72% accuracy of experienced lawyers.4 In contract review, GPT-4 identified legal issues with 87% accuracy, matching or slightly outperforming legal professionals.5 These studies evaluated models available in 2023 and early 2024. Lawyers building tools today have access to substantially more capable systems. If earlier-generation models already matched professional performance in narrow tasks, current models are expected to perform at least as well, and often better, when applied to similarly constrained use cases.

When the base model performs at this level, the value of many enterprise legal technology products changes. Much of what firms are paying for is no longer superior reasoning, but packaging: custom user interfaces, specialized workflow structure, end-to-end integration, debiasing, and enhanced data security. Those features are critical in some contexts, but they can also impose unnecessary rigidity for firms looking to innovate rapidly.

Vibe-coded tools differ from vendor legal tech in this meaningful way because they encode lawyer judgment and case framing directly, rather than imposing predefined structures. They can be quickly customized at the subject, matter, client, lawyer, or presiding judge level, allowing maximum flexibility. This “Agile Sandbox” creates a low-cost environment where lawyers can test workflows, refine assumptions, and determine what actually works before committing to a long-term enterprise-level technology investment.

That same low-stakes testing environment is particularly well-suited to advocacy training, where lawyers can rehearse high-stakes courtroom decisions without real-world consequences.

II. Digital Advocacy: Simulation as a Use Case

Advocacy training and preparation rely heavily on simulation. Lawyers refine arguments, test theories, and build confidence by rehearsing under conditions that approximate real cases. Custom AI tools built through vibe coding align particularly well with that need.

In practice, vibe-coded advocacy tools often take the form of simulated environments: mock oral arguments, witness examinations, evidentiary decision-making exercises, jury selection, or judge-specific practice scenarios. Each tool is designed around a narrow set of assumptions and constraints that reflect the realities of a particular forum or case type. General-purpose legal technology platforms rarely prioritize that level of customization because simulation does not scale easily across users or practice areas.

Vibe coding allows advocates to build and revise simulation tools as case strategy evolves. Such tools do not replace formal training or live practice. Instead, they provide a flexible supplement that allows lawyers to rehearse advocacy decisions in context before making them in court.
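For readers curious what sits beneath a vibe-coded simulation, the following Python sketch shows how a lawyer's natural-language choices might be assembled into the instructions a chat model receives as its system prompt. The configuration fields, the example judge profile, and the prompt wording are all hypothetical illustrations, not features of any actual product.

```python
from dataclasses import dataclass

@dataclass
class MootCourtConfig:
    """Illustrative settings for a simulated oral-argument session."""
    forum: str           # e.g., "federal motion-to-dismiss hearing"
    judge_style: str     # e.g., "hot bench, interrupts frequently"
    case_summary: str    # one-paragraph neutral case statement
    time_limit_min: int  # argument time before rebuttal

def build_system_prompt(cfg: MootCourtConfig) -> str:
    """Turn the lawyer's plain-language choices into the instructions
    an LLM chat session would receive as its system prompt."""
    return (
        f"You are a judge presiding over a {cfg.forum}. "
        f"Bench temperament: {cfg.judge_style}. "
        f"Case background: {cfg.case_summary} "
        f"Question counsel closely, enforce a {cfg.time_limit_min}-minute "
        "limit, and never invent case law; if counsel cites authority, "
        "ask them to state its holding rather than confirming it yourself."
    )

cfg = MootCourtConfig(
    forum="federal motion-to-dismiss hearing",
    judge_style="hot bench, presses on jurisdictional weaknesses",
    case_summary="Plaintiff alleges breach of a software licensing agreement.",
    time_limit_min=15,
)
print(build_system_prompt(cfg))
```

In a live tool, this string would be sent as the system message of a chat session; vibe coding is simply the process of iterating on text like this conversationally until the simulated judge behaves like the real one.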

III. Scaling Expertise

Beyond simulation, custom AI tools offer value from a business-operations standpoint: they allow firms to scale their most valuable asset, human capital. A vibe-coded tool lets a firm capture how its most effective lawyer works (or blend the approaches of several of its most talented lawyers) and make that approach available firmwide at any time. The tool could mimic how that lawyer analyzes problems, prioritizes issues, gives advice, and writes. Used this way, custom AI functions both as a repository of expertise and as a practical guide for junior attorneys. The same can be done for a firm’s administrative work, where a vibe-coded tool modeled on the firm’s most efficient legal assistant can serve as a training reference for new hires and a centralized source for firm policies and procedures.

By contrast, commercial enterprise-level legal technology products deliver the same functionality to every customer, including direct competitors. While vendor tools allow configuration, they operate within standardized workflows and update cycles set by the provider. Vibe-coded tools operate uniquely inside the firm. They rely on internal materials, preferred methods, best practices, and accumulated experience, allowing AI to function as an extension of how the firm actually works, rather than as a general-purpose utility that the firm must conform to.

IV. Ethical Competence

Perhaps the most compelling reason to experiment with building custom AI tools is that it can ensure legal ethics do not fall victim to new technology. High-profile media coverage and corresponding sanctions involving AI-generated hallucinations have made many lawyers wary of using AI at all.6 That hesitation reflects uncertainty about how AI systems behave, not a blanket rule against their use.

Building a tool closes that gap. Designing and testing a custom workflow forces a lawyer to see how inputs shape outputs, where errors arise, and how limitations affect reliability. That experience speaks directly to the duty of technological competence reflected in ABA Model Rule 1.1, comment 8.7 A lawyer cannot supervise an AI system responsibly without understanding how it functions in practice, and hands-on tool building provides the context needed to exercise that judgment.8
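One concrete form that hands-on testing can take is a guardrail the lawyer writes around the model's output. The Python sketch below flags any cited case that is absent from a list the firm has independently verified. The case names and the function are illustrative assumptions; a production pipeline would pair a check like this with a real citator or a citation-extraction library such as the open-source eyecite project, rather than a hand-typed list.

```python
# A minimal guardrail: compare the citations in an AI-generated draft
# against a set the firm has independently verified. All names below are
# hypothetical examples for illustration.

VERIFIED = {
    "Mata v. Avianca, Inc.",
    "Ashcroft v. Iqbal",
}

def flag_unverified(cited: list[str], verified: set[str] = VERIFIED) -> list[str]:
    """Return every cited case absent from the verified set, so a human
    reviews it before the draft leaves the building."""
    return [case for case in cited if case not in verified]

draft_citations = ["Mata v. Avianca, Inc.", "Smith v. Imaginary Corp."]
print(flag_unverified(draft_citations))
# -> ['Smith v. Imaginary Corp.']
```

Writing and running even a toy check like this teaches the builder exactly where the model cannot be trusted unsupervised, which is the practical substance of the competence Rule 1.1, comment 8 demands.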

V. Bridging the Law School Gap

Law students entering the profession today are already being trained to work with AI.9 Through coursework, clinics, and skills-based programs, they are learning how to evaluate what AI can do well, where it fails, and which aspects of lawyering require human judgment. In that sense, AI literacy for new lawyers is not about tools; it is about task differentiation.

Practicing lawyers face the same challenge, often without the benefit of formal training. Building custom tools offers a practical way to develop that skill. Designing a workflow forces a lawyer to decide which tasks can be delegated to a machine, which require human oversight, and which should never be automated at all. Those decisions mirror the judgments students are already learning to make in clinical and experiential settings.

The fundamental question confronting every practitioner in the age of AI is a simple one: What will I be doing three years from now that AI will not be able to accomplish?

The ability to answer that question, and to revisit it as AI technology evolves, depends on genuine AI literacy.

VI. So, Which Is It: Build or Buy?

AI literacy gains aside, there will be situations where building a custom tool is not the right choice. Time constraints, data security requirements, and the sheer number of inputs may make a vendor solution more appropriate. Vibe coding excels at discrete, specialized tasks, but it is not a substitute for full-scale practice management or enterprise systems. Below are some guidelines to consider when making the critical decision to build or buy.

Custom tools make sense when you need:

  • Rapid prototyping for experimental or evolving workflows
  • Hyper-specific functionality tied to a narrow practice area or task
  • Training and professional development, including simulations
  • Broader access to the methods of the firm’s most experienced lawyers
  • Workflows that encode firm-specific practices or preferences

Vendor legal technology may be a better choice when you need:

  • Client or case management solutions
  • Multi-user environments with role-based permissions
  • Integrations across billing, calendaring, and document management systems
  • Compliance infrastructure
  • Ongoing vendor support and maintenance

In the modern law practice, these approaches complement rather than compete. A law firm will need robust client management software. It also may need an AI avatar of its best litigator for juniors to spar with. Why not have both?

Asking and exploring “what can we build?” instead of only “what should we buy?” reflects a different relationship to technology. Lawyers who understand how AI tools are created and tested are better positioned to participate meaningfully in decisions about how AI shapes the future of legal practice.


  1. Andrej Karpathy (@karpathy), X (Feb. 2, 2025), https://x.com/karpathy/status/1886192184808149383 ("There's a new kind of coding I call 'vibe coding', where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.").
  2. Tool building requires a paid subscription to ChatGPT Plus, Claude Pro, or Gemini Advanced (typically $20/month). Platforms like Microsoft Copilot Studio offer enterprise-grade alternatives.
  3. See Daniel M. Katz et al., GPT-4 Passes the Bar Exam, 382:2270 Philos Trans A Math Phys Eng Sci (Apr. 15, 2024), https://doi.org/10.1098/rsta.2023.0254 (GPT-4 passed the Uniform Bar Exam at nearly the 90th percentile).
  4. Nick Whitehouse et al., Better Bill GPT: Comparing Large Language Models against Legal Invoice Reviewers, ARXIV 1, 12 (Apr. 2, 2025), https://arxiv.org/abs/2504.02881 (non-peer reviewed industry research).
  5. Lauren Martin et al., Better Call GPT: Comparing Large Language Models Against Lawyers, ARXIV 7 (Jan. 24, 2024), https://arxiv.org/abs/2401.16212 (non-peer reviewed industry research).
  6. See Mata v. Avianca, Inc., 22-cv-1461, 2023 WL 4114965 (S.D.N.Y. June 22, 2023) (sanctioning attorneys for submitting an AI-generated brief containing fabricated cases). Similar sanctions have proliferated across the court system. See, e.g., Benjamin v. Costco Wholesale Corp., 779 F. Supp. 3d 341, 351 (imposing $1,000 sanction for citing non-existent cases); Versant Funding LLC v. Teras Breakbulk Ocean Navigation Enters., LLC, 2025 U.S. Dist. LEXIS 98418, at *21-23 (S.D. Fla. May 20, 2025) (imposing monetary sanctions ranging from $500 to $1,000 for citing hallucinated cases and awarding attorney’s fees to the opposing party for time spent responding to the offending pleading); Ramirez v. Humala, 2025 U.S. Dist. LEXIS 91124, at *3, 5-6 (E.D.N.Y. May 13, 2025) (finding that citations to nonexistent cases violate Fed. R. Civ. P. 11(b)(2) and imposing a $1,000 monetary sanction); Nguyen v. Savage Enters., 2025 U.S. Dist. LEXIS 37125, at *1-2 (E.D. Ark. Mar. 3, 2025) (imposing a $1,000 monetary sanction); Lacey v. State Farm Gen. Ins. Co., 2025 U.S. Dist. LEXIS 90370, at *11 (C.D. Cal. May 5, 2025) (imposing sanctions where approximately 9 of 27 citations were incorrect in some way and ordering payment of $26,100 in fees); Johnson v. Dunn, 792 F. Supp. 3d 1241 (N.D. Ala. July 23, 2025) (ordering public reprimand and disqualification).
  7. Model Rules Pro. Conduct 1.1 & cmt. [8] (A.B.A. 2023) (requiring lawyers to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.").
  8. When lawyers build their own tools, they also retain more transparent control over data flow, storage, and retention. That control directly supports duties of confidentiality and reasonable safeguards for client information. By contrast, off-the-shelf AI tools may obscure where data is processed and whether it is used for model training. Ethical competence requires not merely trusting vendor assurances, but understanding and managing those risks directly.
  9. AM. BAR ASS'N, AI AND LEGAL EDUCATION SURVEY RESULTS 2024 (June 24, 2024), https://www.americanbar.org/content/dam/aba/administrative/office_president/task-force-on-law-and-artificial-intelligence/2024-ai-legal-ed-survey.pdf, at 6 (survey of 29 law schools found 83% offer AI-related curricular opportunities; response rate limits generalizability to all law schools); see also ABA TASK FORCE ON LAW AND ARTIFICIAL INTELLIGENCE, ADDRESSING THE LEGAL CHALLENGES OF AI: YEAR 2 REPORT ON THE IMPACT OF AI ON THE PRACTICE OF LAW 28-29 (Dec. 2025), https://www.americanbar.org/content/dam/aba/administrative/center-for-innovation/ai-task-force/2025-ai-task-force-year2-report.pdf (highlighting a few law school approaches).
More from The National Law Review's Guest Contributors - NLR
