Imagining Lawyer Malpractice in the Age of Artificial Intelligence
Thursday, June 5, 2025

My dear Miss Glory, the Robots are not people. Mechanically they are more perfect than we are; they have an enormously developed intelligence, but they have no soul. 1

The court quoted this passage at the outset of Bunce v. Visual Tech. Innovations, Inc.,2 a recent case involving a lawyer who used ChatGPT to write a legal brief containing hallucinated, or fake, legal cases in support of the argument presented to the court. In sanctioning the offending lawyer, the court noted that “[t]o be a lawyer is to be human, a tacit prerequisite to comply with Federal Rule of Civil Procedure Rule 11(b)(2).”3

The future is here. Generative Artificial Intelligence (GAI) is fully capable of generating legal briefs, pleadings, demand letters, and other legal correspondence.4 Just open ChatGPT and prompt it to write a demand letter based on some set of facts involving a casualty, asking for one million dollars. Open AI models like ChatGPT can produce such documents on demand, and legally trained GAI models can write genuinely strong legal briefs and correspondence.
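
For illustration only, the sketch below shows how such a prompt could be sent to a general-purpose model programmatically using the OpenAI Python client; the model name and the facts in the prompt are hypothetical placeholders, not a recommendation.

    # A minimal sketch (Python, OpenAI client); the model name and facts are placeholders.
    from openai import OpenAI

    client = OpenAI()  # assumes an API key is configured in the environment

    prompt = (
        "Draft a demand letter on behalf of a client injured in a slip-and-fall "
        "at a grocery store, demanding one million dollars in damages."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)  # the generated first draft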

What is a legally trained GAI model and how does it work?

GAI works by using neural networks to learn patterns and structures within existing data. This allows GAI to generate new and original content based on the prompts or inputs it receives from users. In doing so, GAI effectively mimics the process of human creativity by creating something entirely new from learned information.

GAI models are trained on large sets of existing content. This training helps the model understand the patterns and relationships within that data. The training data typically covers a wide range of variations and examples within a specific domain. And so, for text data, like a legal brief, the model needs a lot of examples of legal briefs, and perhaps pleadings and legal correspondence, too.

Once a model is trained, it can generate new content by sampling from the learned patterns and creating outputs that are similar to the data used in its training but with variations. The model is then “fine-tuned” by being given “feedback” on the outputs it presents in response to the inputs it receives. It then uses this “feedback” to improve its performance by providing more accurate and relevant outputs in the future. In other words, the more training data and feedback the model receives, the better the outputs.
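
To make the idea of “sampling” concrete, the simplified sketch below shows how a language model picks its next word: it assigns probabilities to candidate words and samples from them, which is why the same prompt can yield slightly different outputs each time. The vocabulary and probabilities here are invented for illustration, not taken from any real model.

    # Simplified illustration of next-word sampling; the candidate words and
    # probabilities are invented for illustration.
    import random

    # Suppose the model, given the prompt "The plaintiff demands ...",
    # assigns these probabilities to candidate next words:
    candidates = ["damages", "judgment", "relief", "sanctions"]
    probabilities = [0.55, 0.25, 0.15, 0.05]

    # Sampling from this distribution (rather than always choosing the most
    # likely word) is what produces variation from one generation to the next.
    next_word = random.choices(candidates, weights=probabilities, k=1)[0]
    print("The plaintiff demands", next_word)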

A legally trained model, meaning one trained on large data sets involving the law, such as legal briefs and legal correspondence, can generate accurate and relevant briefs and letters based on the prompts it receives. Thus, a legally trained GAI model can do things like:

  • Summarize complex legal cases.
  • Analyze contracts.
  • Identify key legal arguments.
  • Predict outcomes based on similar cases.
  • Generate efficient first drafts of legal documents.

In a world where lawyers are overworked and have too much going on in their professional lives, legally trained GAI models can make lawyers more efficient by creating first drafts of letters and briefs.

The limitations of using GAI models

It is important to understand that a GAI model is only as good as the training data and feedback it receives. If the training data contains errors, the model is effectively trained to recreate those errors when generating new content. Similarly, if the training data has biases, the outputs of the GAI model are likely to mimic those biases. If biases and errors are not weeded out through the feedback process, they can seriously compromise the outputs generated by the model. Likewise, if the user’s prompts are not precisely written, or if the user is not well trained in how to prompt the model, the results will likely not be what the user is seeking, which can undermine confidence in the GAI model. Therefore, training on how to use a GAI model effectively is critical to its usefulness.

The primary problems lawyers have had with non-legally trained GAI models arise when facts or legal cases are hallucinated, which effectively creates “false” content in the outputs, even though it might appear true to the user.5 This generally occurs due to limitations in the training data provided to the model. In hallucinating, the model effectively makes assumptions based on the patterns it has learned from the data, even though those patterns do not apply to the context at hand. The false outputs are simply inaccurate assumptions that the model does not recognize as inaccurate. A hallucination may statistically fit the prompt, but it lacks the “real world” grounding, or “common sense,” that a person would use to reject the response. Stated differently, GAI models hallucinate because they lack the “soul” necessary to differentiate truth from fiction.

So, beyond understanding that GAI models hallucinate, where do the pitfalls lie for lawyers who use them? They lie in three main areas: (1) a failure to understand GAI’s limitations; (2) a failure to supervise the use of GAI; and (3) data security and confidentiality concerns.

The failure to understand GAI’s limitations

A failure to understand GAI’s limitations is the primary theme in many of the reported cases in which lawyers have been sanctioned for using GAI to create legal briefs containing false facts or cases. The use of GAI requires oversight by the attorney employing it. Even legally trained GAI models require human oversight to confirm that the generated content is accurate. At best, the output must be considered a first draft, to be reviewed and corrected before it is shared with a client, opposing counsel, or a court.

Another limitation of GAI is that the output will generally only be as strong as the prompts that generate it. To that end, it is critical that attorneys using GAI be trained on how to properly prompt the model to get the results sought. For untrained lawyers using open AI models, there is a high risk that the generated content will not be what the lawyer seeks. More importantly, an untrained lawyer may not even realize that the output contains false content. While GAI can make the lawyer more efficient, most lawyers are not willing to accept poor work product as the cost of that efficiency. And certainly, false content or poor work product may lead to future legal malpractice cases for attorneys who use GAI but fail to account for or understand its limitations.
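
As a purely illustrative sketch, with both prompts invented and containing no real client details, compare a vague prompt with a precise one; the second supplies the facts, scope, and format the model needs to produce a usable first draft:

    # Illustrative only; both prompts are invented and contain no real client details.
    vague_prompt = "Write a demand letter for my client."

    precise_prompt = (
        "Draft a demand letter on behalf of a client who slipped on an unmarked "
        "wet floor at a grocery store, injuring her back. Demand one million "
        "dollars, summarize the facts in two short paragraphs, do not cite any "
        "case law, and set a 30-day deadline to respond."
    )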

The failure to supervise

Lawyers have a professional responsibility to supervise those who work for them. If lawyers permit the use of GAI within a law firm, there must be supervision of that use. For the same reasons that lawyers must be vigilant in their own use of GAI, they must also train and supervise those in their employ in its use.

Further, there are many legally trained GAI models available in the marketplace. If a lawyer and their firm decide to use a particular model, it is incumbent upon the lawyer to ask questions, learn the limitations of the selected model, and train and supervise the use of the model with those limitations in mind.

Legal malpractice claims frequently arise when lawyers or their staff are not properly trained, resulting in errors that impact the lawyer’s work product. As with any new technology, supervision of the use of GAI is therefore critical to avoiding liability for lawyers and law firms.

The failure to protect client information

Lawyers have a professional responsibility to protect the information and secrets of their clients. When using an open AI model like ChatGPT, any information shared with the model is used by the model as training data. As such, if client information is shared with an open AI model, it is effectively being shared with the public. Even if the user tries to be vague about the data being shared, if enough information is provided, the model may fill the informational gaps with correct assumptions and effectively identify the client, even though the lawyer never named the client. In this way, it may appear that the lawyer has revealed information about a client, even if that was not the intent.

In addition, cybersecurity is critical when using AI models, as the same security issues that affect any internet-connected tool are present with those models. Lawyers must account for these cybersecurity risks as the standard of care adjusts to the realities of lawyers’ use of AI and requires them to protect client information in the face of those risks.

The use of closed, legally trained GAI models is an attempt to address these risks. But, at the end of the day, the lawyers who use them must take steps to ensure that vendors are following through on their promise to protect client data.

Protection of client information and secrets remains fundamental to the services offered by lawyers to their clients. And certainly, if a client’s information does become public, this presents a potentially significant risk for lawyer liability.


1 Capek, Karel, R.U.R. (Rossum's Universal Robots): A Fantastic Melodrama in Three Acts and an Epilogue 17 (Paul Selver and Nigel Playfair trans., Samuel French, Inc. 1923).

2 2025 U.S. Dist. LEXIS 36454, at *1 (E.D. Pa. Mar. 13, 2025).

3 Id.

4 But not necessarily legal research, which has led to the problematic usage of ChatGPT by attorneys.

5 The first reported case was Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). Since then, there have been more than a dozen additional reported cases of lawyers relying on hallucinated case citations and being sanctioned under Rule 11 of the Federal Rules of Civil Procedure.
