“It has become appallingly obvious that our technology has exceeded our humanity.”
This quote, attributed to Albert Einstein, warns that as technology rapidly advances, human ethical oversight is required to ensure tools are used thoughtfully and appropriately. Decades later, the warning rings louder than ever as generative artificial intelligence (“GAI”) transforms the practice of law in ways that eclipse even the introduction of computer-assisted legal research by Westlaw and Lexis in the 1970s.
GAI has brought about revolutionary changes to the practice of law, including litigation, over the last five years, particularly since the November 30, 2022 launch of ChatGPT and its subsequent rise. The advances are so significant and rapid that the legal profession recently witnessed the first appearance by an avatar before a court in a real proceeding.1 In a profession governed by a defined set of principle-based ethical rules, litigators making new use of the technology will likely find themselves navigating an ethical minefield. This article focuses on the governing ethical rules and offers tips to avoid violations.
Brief Overview of GAI
GAI is a powerful technology – typically taking the form of large language models (“LLMs”) trained on massive datasets – that mimics human intelligence, allowing it to perform a variety of functions seemingly as if it were a person. GAI can analyze data, produce relevant research, and generate new content, including written material, images, and video. For ease of use, GAI functions through “chatbots,” which simulate conversation and allow users to seek assistance through text or voice. The main chatbots include ChatGPT (developed by OpenAI), Gemini (Google), Claude (Anthropic), and Copilot (Microsoft). These chatbots are all public-facing, meaning any member of the public can use them. Unless the functionality is switched off, queries and information shared by a user are retained and continue to train the model, meaning such information loses its confidentiality. There is also non-public-facing GAI, which utilizes proprietary models that are private to the user.
The main difference between GAI and the versions of Westlaw and Lexis that today’s lawyers grew up using is that GAI can do the same research and much more. Responding to conversational commands, GAI can perform human-like functions, including creating first drafts of documents and culling document sets of all sizes for relevance and privilege.
GAI Litigation Use Cases
The early days of GAI have seen five main use cases in the litigation context as set out below.
Legal Research
In little more than a quarter century, the legal profession has seen revolutionary advances in legal research. The manual and laborious practice of visiting law libraries and pulling bound case reporters gave way to technology as Westlaw and Lexis gained widespread use in the early 1990s with the rise of personal computers. Recent years have witnessed the next technological revolution in legal research with the advent of GAI. Searches are now simpler and available to all. Whereas traditional Lexis and Westlaw (each now offers a GAI version) rely on more primitive search terms and require a paid subscription, GAI allows for conversational commands – typed or spoken – and basic versions are available free of charge. GAI can also conduct advanced queries, such as mining vast troves of data to ensure no precedent is missed.
E-Discovery / Document Review
“Technology Assisted Review” (more commonly known as “TAR”) has materially changed e-discovery over the last decade. TAR, also known as predictive coding, learns from a lawyer’s tagging of sample documents and efficiently classifies the remainder of the population. The process has dramatically sped up e-discovery and has reliably assisted with relevance and privilege determinations. Following a seminal 2012 decision by SDNY Magistrate Judge Peck,2 who encouraged the use of predictive coding in cases involving large data volumes, U.S. judges have regularly approved TAR as an acceptable (and often superior) method for identifying responsive documents.
GAI is elevating TAR to the next level. Before GAI, TAR relied heavily on keyword searches, and the results were only as good as the search terms. By contrast, GAI can learn context and intent, which allows it to surface relevant documents even if they do not contain the identified keywords. Similarly, GAI can learn legal concepts and flag privileged material. Unlike legacy TAR systems, which must be re-trained for each project, GAI LLMs constantly re-train themselves and stand ready for use.
Studies have found that GAI-based review can be more accurate and efficient than human review in finding relevant material. For example, a 2024 study published by Cornell University, which pitted LLMs against humans in a contract review exercise, found that LLMs matched or exceeded human accuracy, completed the task in mere seconds compared to hours for humans, and offered a 99.97 percent reduction in costs.3
Legal Writing
GAI excels at preparing first drafts of virtually any legal document. For a simple example, ask ChatGPT to “please draft an SDNY complaint for a 10b-5 claim where the stock price dropped 10 percent after the CEO misrepresented future prospects.” First-time users will do a double take at the results – a highly workable draft that provides both structure and the start of relevant content. Studies again confirm the efficiencies – with one showing GAI cut brief-writing time by 76%.4
Trial Preparation
GAI has shown great promise in assisting trial preparation by summarizing and organizing documents, such as deposition transcripts. Beyond assessing relevance, GAI can create chronologies and zero in on key admissions and impeachment material. GAI can compare a deposition transcript to prior statements by the witness or others to speedily find inconsistencies. GAI can even generate deposition transcripts in real time, allowing questioners to raise inconsistencies that will be captured in the official record, and can suggest relevant follow-up questions as the testimony unfolds.
Predictive Analytics for Trial Outcomes
Lastly, GAI can sift through vast legal data sets to help forecast how a trial may play out. It can assess legal precedent, rulings by the judge, and even jury behavior. The analysis can get granular, focusing on issues such as how often a particular judge grants motions to dismiss and what types of arguments the judge finds most persuasive. While it is still early days, there is one reported episode in which a law firm inclined to settle won at trial after a GAI tool predicted an 80% chance of victory.5
Ethical Issues
While the benefits are immense, use of GAI is laden with risks that must be carefully managed. As Chief Justice John Roberts stated in his 2023 year-end report, which was devoted to artificial intelligence, “Any use of AI requires caution and humility.”6 To apply caution and avoid pitfalls, lawyers should be mindful of two main concerns when using GAI:
(1) While the tools are immensely powerful in applying logic to locate and analyze material, they cannot discern the truth and can become confused, leading to erroneous output – colloquially referred to as “hallucinations” and discussed below – thereby requiring close human review; and
(2) Several of the Model Rules of Professional Conduct (and state versions thereof) apply to the use of GAI and must be complied with, at the risk of attorney discipline.
The Rules
No lawyer should use GAI in practice without being aware of the following Model Rules, along with the relevant states’ versions, and their application to the technology. For a fuller review of the governing Model Rules, practitioners would be well-served to read ABA Formal Opinion 512, dedicated to GAI.7
- Competence – Rule 1.1 and ABA Comment 8 thereunder, which require that lawyers keep informed of changes in the law and its practice, including the benefits and risks associated with relevant technology.
- Client Communication – Rule 1.4, which requires sharing with the client all information that can be deemed important to the client, including the means by which the client’s objectives are to be met.
- Fees and Expenses – Rule 1.5, which requires that both be reasonable.
- Confidentiality of Client Information – Rule 1.6, which requires informed consent from clients before disclosing their information to third parties.
- Candor Toward the Tribunal – Rule 3.3, which sets forth specific duties to avoid undermining the integrity of the adjudicative process, including prohibitions on submission of false evidence.
- Responsibilities of Partners, Managers and Supervisory Lawyers – Rule 5.1, which requires such persons to take reasonable steps to ensure that all lawyers in their firm conform to the Rules of Professional Conduct.
Tips to Avoid Running Afoul
While not meant to be an exhaustive list, the following tips represent critical practice points based on early GAI experience.
- Do not learn GAI on the job
Lawyers’ first use of GAI should not be in connection with a live matter. Under Rule 1.1, lawyers have a duty to understand a technology – including its nature of operation, benefits, and risks – prior to use.8 As per ABA Formal Opinion 512, a “reasonable understanding,” rather than expertise, is required.9 To satisfy this requirement, attorneys should be familiar with how a tool was trained, its capabilities (which tasks it can be used for), and its limitations (confidentiality, bias risk based on the data on which it was trained, etc.) before using the tool.
A cautionary example comes from Michael Cohen, President Trump’s former lawyer, who presented his own lawyer with three non-existent cases that made their way into motion papers.10 Cohen obtained the cases from Google Bard, a GAI tool, and, upon realizing the mistake, informed the court that he “did not realize [Bard] … was a generative text service that … could show citations and descriptions that looked real but actually were not. Instead [he had] understood it to be a super-charged search engine.” The court, while describing Cohen’s belief as “surprising,” declined to impose sanctions, finding a lack of bad faith.
- Never submit GAI-generated research to a court or adversary without human verification of all relevant points – legal and factual.
Cohen aside, there have been at least six additional hallucination cases to date in which erroneous cites were submitted to courts.11 These have included citations to cases that simply do not exist12 as well as to cases that do exist but do not support the proposition for which they are cited.13 Accordingly, it is not enough to verify that cited cases exist – it must also be confirmed that each citation stands for the point represented. These matters have involved law firms big14 and small,15 showing the risks are not confined to small firms looking to save on the costs of Lexis and Westlaw.
These risks are real and material. A June 2024 Stanford University study found that leading GAI legal research tools hallucinate between 17% and 33% of the time.16 The consequences can be severe, ranging from professional embarrassment17 to Rule 11 sanctions to referrals to bar associations for potential discipline based on competence violations.
- Promptly notify the court and adversaries of any known inaccuracy generated by GAI.
Should an error occur, it is imperative to promptly notify the court and adversaries. The duty of candor imposed by Rule 3.3 requires that lawyers correct any material false statement of law or fact made to a tribunal. Prompt remedial action can also persuade the judge to refrain from imposing sanctions for submission of the hallucination.18
- Never input client information into a GAI tool without informed client consent and an evaluation of attendant risks.
As stated clearly in ABA Formal Opinion 512 and its analysis of Rule 1.6 governing the confidentiality of client information, “[B]ecause many of today’s self-learning GAI tools are designed so that their output could lead directly or indirectly to the disclosure of information relating to the representation of a client, a client’s informed consent is required prior to inputting information relating to the representation into such a GAI tool.” Informed consent should come from a direct documented communication and not from statements made in boilerplate engagement letters.
Consent aside, Rule 1.1 governing competence requires an evaluation of the disclosure risks of inputting such information, including from use by the public, by others within the same firm who are walled off from the matter, and via cyber breaches. As a baseline, practitioners should read the Terms of Use, privacy policy, and other contractual terms applicable to the GAI tool being used. Such review should focus on: the confidentiality provisions in place; who has access to the tool’s data; the cyber controls that are in place; and whether the information is retained by the tool after the lawyer’s use is discontinued. Even if the system is custom to a firm, issues can still arise if others within the firm who are not working on the matter can inadvertently view such data.
- Never input client information into a public-facing GAI tool, period.
Client information should never be inputted into a public-facing GAI tool such as ChatGPT. While ChatGPT keeps conversations private from other users by default, OpenAI uses chats to improve model performance (i.e., for training) unless the user opts out. The risks of disclosure of client information are too great, and the consequences too severe (violations of ethical rules and waiver of attorney-client privilege and work-product protections), for client information to be shared with such tools.
- Discuss with the client any use of GAI to help form litigation strategy.
Rule 1.4 requires lawyers to advise the client promptly of information which would be important for the client to receive, including the means by which the client’s objectives are to be accomplished. As such, lawyers should consult with their client prior to using GAI to influence a significant litigation decision. As stated in ABA Formal Opinion 512, “A client would reasonably want to know whether, in providing advice or making important decisions about how to carry out the representation, the lawyer is exercising independent judgment or, in the alternative, is deferring to the output of a GAI tool.” By contrast, it is unlikely that using GAI to conduct research gives rise to a consultation requirement, just as it is not the practice to do so today when using Westlaw or Lexis.
- Ensure all fees and expenses tied to GAI use are reasonable.
Rule 1.5 provides that a lawyer’s fee must be reasonable and that the basis for the charges must be communicated to the client. The new terrain of GAI can lead to violations of this rule. Under ABA Formal Opinion 93-379, lawyers cannot bill for more time than they worked. While GAI can speed up the completion of many tasks, lawyers may only bill for the actual time spent on a task, even if compressed. Next, as per ABA Formal Opinion 512, it is permissible for lawyers to bill for the time spent inputting data and running queries, as well as for learning a tool that is custom to the client. By contrast, it is not permissible to bill for time spent learning a tool that will be used broadly within a lawyer’s practice. As to expenses, per ABA Formal Opinion 93-379, absent an agreement, a lawyer can charge a client no more than the direct cost associated with the tool plus a reasonable allocation of related overhead – no surcharges or premiums may be added. Flat fee arrangements give rise to their own considerations – under Model Rule 1.5, the fee must be “reasonable,” which would not always be the case if GAI allowed the project to be completed in rapid fashion. Here, price adjustments may be in order. As stated in Formal Opinion 512, “A fee charged for which little to no work was performed is an unreasonable fee.”
- Managerial lawyers must take steps to ensure those at their firm comply with the rules.
To ensure compliance with Rule 5.1, law firms should establish clear policies and provide training on the appropriate use of GAI prior to allowing such use.
Conclusion
As its use takes off, GAI is likely to have several material impacts on the practice of law, including: reduced demand for law firm associates, as research, first drafts of documents, first-level document review, and other traditional associate tasks will now be handled by computer; increased cost-reduction pressure, given clients’ greater expectation of efficiencies; and potentially higher-quality work product, given the broader landscape of data that can be efficiently surveyed for relevance and the additional lawyer time available for strategic and tactical decisions.
Increased future use of GAI will also assuredly result in more errors, hallucinations and otherwise. To guard against the consequences of missteps, practitioners should treat the tips herein as a baseline for good practice.
1 See Larry Neumeister, An AI Avatar Tried to Argue a Case Before a New York Court. The Judges Weren’t Having It, AP News, April 4, 2025 (discussing a March 26, 2025 proceeding before the New York State Supreme Court, Appellate Division, where a panel of judges shut down a pro se litigant’s attempt to show a video of his argument delivered by an avatar moments into its delivery; the court scolded the litigant and required him to deliver his argument himself).
2 Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (SDNY 2012).
3 Lauren Martin, Nick Whitehouse, Stephanie You, Lizzie Catterson, and Rivindu Perera, Better Call GPT, Comparing Large Language Models Against Lawyers, Cornell University (January 24, 2024).
4 Bob Ambrogi, CaseText Study Says Its ‘Compose’ Technology Cuts Brief-Writing Time by 76%, LawSites (July 28, 2020).
5 Ashley Hallene and Jeffrey M. Allen, Using AI for Predictive Analytics in Litigation, ABA website, October 2024.
6 Chief Justice John Roberts, 2023 Year-End Report on the Federal Judiciary at 5.
7 ABA Formal Opinion 512, Generative Artificial Intelligence Tools, July 29, 2024.
8 ABA Comment 8 to Model Rule 1.1 (“[A] lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”).
9 Id. at 2.
10 U.S. v. Cohen, 18-CR-602 (JMF) (SDNY March 20, 2024).
11 Sara Merken, AI ‘hallucinations’ in court papers spell trouble for lawyers, Reuters (Feb. 18, 2025) (citing at least seven such cases over the prior two years).
12 See Mata v. Avianca, Inc., No. 22-CV-1461 (SDNY June 22, 2023) (lawyers sanctioned for citing six cases that do not exist); Wadsworth v. Walmart LLC, No. 2:23-CV-118-KHR (D. Wyo. Feb. 24, 2025) (lawyers sanctioned for citing eight cases that do not exist in motions in limine); U.S. v. Cohen, 18-CR-602 (JMF) (SDNY March 20, 2024) (sanctions considered but not imposed on Michael Cohen and his attorney for citing three cases that do not exist); Iovino v. Michael Stapleton Assoc., No. 5:21-cv-00064 (W.D. Va. July 24, 2024) (sanctions considered but not imposed on lawyer for citing two cases that do not exist).
13 Iovino, id (also citing two cases for quotes that are not found in the opinions).
14 See Wadsworth, supra note 12 (AmLaw 100 firm filed a motion citing eight cases that do not exist).
15 See, e.g., Mata, supra note 12 (two lawyers fined $5,000).
16 Varun Magesh, Faiz Surani, Matthew Dahl, Mirac Suzgun, Christopher D. Manning, and Daniel E. Ho, Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools, Stanford University (June 6, 2024).
17 Negative press aside, the lawyers in the Avianca matter were ordered to send the judge’s opinion sanctioning them – which called their filing “legal gibberish” – to the judges to whom the GAI improperly attributed the fake citations. Similarly, in the Michael Cohen matter, Judge Furman described the episode as “embarrassing” and “unfortunate.” Cohen, supra note 10.
18 Iovino v. Michael Stapleton Assoc., No. 5:21-CV-64, Transcript of Order to Show Cause Proceeding (W.D. Va. Oct. 9, 2024).