Very Soon, Companies Will Have to Defend Their AI’s Decisions Too
Thursday, September 5, 2024

In the last 30 years, artificial intelligence (AI) has transformed from software that could almost beat a chess world champion to today’s systems that use “language and image recognition capabilities … comparable to humans,”[i] draft entire articles in seconds (not this one!), generate images and videos nearly indistinguishable from real ones, and convincingly voice a fake Oasis reunion album.[ii]

AI’s emergence as a capable tool for business and personal use has drawn considerable media attention. In turn, critiques and thought pieces have focused on exploring its use, preventing its misuse, and conceptualizing how different sectors can and should adapt to its inevitability. This is certainly the case in the legal field, as local, state, national, and international law organizations continue to showcase the possibilities of AI and how it will affect litigants and decision-makers. Key to any industry’s discourse is answering the question: “What does AI mean for us?”

We have heard much discussion in the legal field regarding the admissibility of AI-generated evidence, jurors’ trust in said evidence, and its current and future uses in attorney preparations and courtroom proceedings. However, less focus has been placed on what AI used in business settings will do to the fact patterns of corporate litigation. Soon enough, lawsuits concerning product liability, employment, antitrust, intellectual property, and more will begin to implicate businesses’ use of AI—an immensely powerful but largely obscure technology—in their fateful actions.

When Could AI’s Use in Business Lead to Litigation?

Authors are already up in arms about the dubious way AI systems have been “trained”—the process of feeding vast amounts of data to the algorithm, analyzing the results, and iterating accordingly[iii]—on mountains of their copyrighted work.[iv] And although it is difficult to predict exactly what forms AI will take as it further integrates with businesses, be it expanded or limited for specific needs, one can easily imagine a slew of plaintiff claims waiting to hit the pipeline (a simplified sketch of that training loop follows the list):

  • AI tasked with fielding job candidates did so in a discriminatory fashion.
  • AI logistics software calculated an unsafe route, schedule, or load size, resulting in a tragic trucking accident.
  • AI diagnostics software failed to recommend a test that would have caught a patient’s fatal condition.
  • AI set anti-competitive pricing, manufactured a design defect, infringed a patent, or violated consumers’ data privacy.[v]
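
For readers curious what that “training” loop actually looks like, here is a minimal, purely illustrative sketch in Python. It is not the code behind any real AI system; the tiny dataset, the two-parameter “model,” and the learning rate below are hypothetical stand-ins chosen only to make the feed-data, analyze-the-results, adjust-and-repeat cycle concrete.

    # Illustrative sketch of "training": feed data, analyze results, iterate.
    # All values are hypothetical toy numbers; real systems adjust billions
    # of parameters, but the basic cycle is the same.

    # Toy dataset of (input, correct output) pairs hiding the rule y = 2x + 1.
    data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

    weight, bias = 0.0, 0.0  # the "model": two adjustable numbers
    learning_rate = 0.05
    n = len(data)

    for step in range(500):
        # 1. Feed the data through the model to get predictions.
        predictions = [weight * x + bias for x, _ in data]
        # 2. Analyze the results: how far off is each prediction?
        errors = [p - y for p, (_, y) in zip(predictions, data)]
        # 3. Iterate: nudge each parameter to shrink the average error.
        grad_w = sum(2 * e * x for e, (x, _) in zip(errors, data)) / n
        grad_b = sum(2 * e for e in errors) / n
        weight -= learning_rate * grad_w
        bias -= learning_rate * grad_b

    print(f"learned rule: y = {weight:.2f}x + {bias:.2f}")  # approaches 2x + 1

The practical upshot for litigators: the system’s behavior is not written by a human as explicit rules but emerges from this statistical iteration over data, which is precisely what makes questions about training data and accountability so thorny.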

As we track trends in juror attitudes and overall decision-making, we have a keen interest in determining how AI’s inclusion in these classic litigation genres will interact with jurors’ views, biases, and, ultimately, verdicts. The first step to answering this question will be to carefully analyze the attitudes and experiences they develop concerning AI. In the coming years, we will see fewer jurors who have never used it and more whose lives have been changed or utterly transformed by it—for better or for worse.

How Does the Current Jury Pool Feel About AI?

Public views about AI and its implications have garnered much inquiry in recent years, with the Pew Research Center diligently tracking relevant attitudes since 2021.[vi] To get a pulse on where jurors stand now, arguably at the dawn of the AI revolution, IMS Legal Strategies also surveyed a national sample of 210 jury-eligible citizens from late 2023 to early 2024 to gauge their experiences and attitudes toward artificial intelligence.

Echoing the findings of Pew’s 2023 poll, our sample of jury-eligible individuals exhibited a solid baseline of familiarity with AI. Pew reported that 90% of its respondents had heard of AI; our own research indicated that 84% of jury-eligible respondents are somewhat (56%) or very (28%) familiar with AI and its applications. The introduction of chatbot services such as ChatGPT has undoubtedly driven much of that familiarity. Although both surveys revealed that most people have heard of ChatGPT (58% of the Pew sample and 68% of our sample), our research found that a considerably smaller percentage of people (38%) have actually used it or similar chatbot services. Granted, these numbers will likely rise, and associated attitudes will evolve, as media attention and industry adoption continue to increase awareness and accessibility.

At the same time, apprehension about the increasing use of artificial intelligence in our daily lives has surged. In Pew’s 2023 poll, 52% of respondents expressed being “more concerned than excited” about AI, compared to 37% in 2021 and 38% in 2022. Our own poll landed at 41% on that measure—though, perhaps most notably, both of these most recent polls found that a mere 10% of respondents were “more excited than concerned” (the remainder reported both emotions in equal parts).

Where is this unease coming from? A portion surely stems from various reports highlighting AI’s current shortcomings (e.g., its willingness to present falsities as fact [generously dubbed “hallucinations”][vii] or its potential for discrimination[viii]), further compounded by anxiety about how it might eliminate jobs or otherwise encroach on employees in the workplace. Indeed, Pew found that individuals already strongly oppose AI involvement in hiring practices, such as reviewing job applications (41% oppose, 28% favor, 30% unsure), and an even larger proportion oppose AI making final hiring decisions (71% oppose, 7% favor, 22% unsure). Though there were some areas where opposition to AI was less pronounced—including monitoring workers’ driving behavior, analyzing how retail workers interact with customers, or evaluating how well people are doing in their jobs—a negative sentiment prevails, particularly when it comes to employers’ ability to surveil employees. How corporations elect to use AI moving forward will greatly impact this outlook by shaping employees’ individual experiences and resulting attitudes.

What Does AI Mean for Defendant Corporations?

Unknowns abound as businesses consider incorporating these new technologies into their day-to-day practices. Given we are still in the nascent stages of AI’s rollout, a daunting variety of questions awaits companies that face litigation in the future. For example:

1) What role will experts play in educating the jury on the inner workings of artificial intelligence? In arguing the reasonableness of AI’s decision-making and its role in causation or a defendant’s negligence? 

How much credence will jurors lend to these types of experts? Whether in-house or external, such experts may be viewed as akin to Human Resources directors in employment litigation or Persons Most Knowledgeable (PMKs) in product liability matters. Their ability to simplify the processes and capabilities of artificial intelligence to the layperson juror may prove paramount to the defense’s position. Of course, if AI developers themselves cannot fully account for how the systems work,[ix] how can experts?

2) If a human has been removed from the equation, who will jurors believe is most responsible when an AI “fails”? Will every AI-led decision, no matter how small, require a human to sign off and shoulder responsibility for it?

Who will jurors perceive as the “decision-maker” as far as liability is concerned? The company as a whole? The executive who implemented the technology? The technician who oversees it (if any)? Jury psychology suggests that blaming an AI alone would not be a cognitively satisfying outcome—AI cannot be punished or face justice. Yet, what if the AI itself eventually becomes the party most conversant with key case issues and decisions?

3) Might the original developer of the AI system in question, or at least the party who “trained” it, serve as a convincing “empty chair” to help mitigate a defendant’s perceived fault? What contracts will we see formed between the AI developer and business customer to address potential liability?

4) Will the prevalence of powerful AI tools exacerbate juror hindsight bias issues regarding what companies could or should have done or known? To what extent will attorneys and experts, more than ever, need to help jurors keep track of what features were and were not available at the time?

5) And, of course, how will juror risk profiles change for purposes of jury selection?

In Conclusion

The fact that the questions above may be only the tip of the iceberg reflects the magnitude of the changes at our doorstep. At this point, we cannot even know all the questions worth asking about our shared future with AI, let alone have all the answers. Barring widespread regulation of its use or its role in litigation, however, it is safe to say that jurors’ evolving views will set the tone as we approach a novel generation of lawsuits. As the many considerations about AI’s effect on corporate litigation come into focus, we plan to conduct periodic follow-up studies for a deeper dive into how jurors might evaluate these hazy new issues of AI-related liability.

This article originally appeared in the Summer 2024 issue of USLAW Magazine. Republished with permission.

References

[i] Roser, M. (2022, December 6). The brief history of artificial intelligence: The world has changed fast – what might be next? Our World in Data. https://ourworldindata.org/brief-history-of-ai

[ii] Dunworth, L. (2023, April 18). Here’s a “lost” Oasis album – created by AI. NME. https://www.nme.com/news/music/heres-a-lost-oasis-album-created-by-ai-3432125

[iii] Chen, M. (2023, December 6). What is AI model training & why is it important? Oracle. https://www.oracle.com/artificial-intelligence/ai-model-training/

[iv] Reisner, A. (2023, August 19). Revealed: The authors whose pirated books are powering generative AI. The Atlantic. https://www.theatlantic.com/technology/archive/2023/08/books3-ai-meta-llama-pirated-books/675063/

[v] We asked ChatGPT to supplement our brainstorm, and it kindly offered these last few ideas. (OpenAI. (2024). ChatGPT 3.5. https://chat.openai.com/)

[vi] Faverio, M., & Tyson, A. (2023, November 21). What the data says about Americans’ views of artificial intelligence. Pew Research Center. https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/

[vii] O’Brien, M. (2023, August 1). Chatbots sometimes make things up. Is AI’s hallucination problem fixable? AP News. https://apnews.com/article/artificial-intelligence-hallucination-chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4

[viii] Wolf, Z.B. (2023, March 18). AI can be racist, sexist and creepy. What should we do about it? CNN. https://www.cnn.com/2023/03/18/politics/ai-chatgpt-racist-what-matters/index.html

[ix] Heaven, W.D. (2024, March 4). Large language models can do jaw-dropping things. But nobody knows exactly why. MIT Technology Review. https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
