On May 17, 2024, the Judicial Conference’s Advisory Committee on Evidence Rules released its report on artificial intelligence, which discusses the potential need for modifications to the Federal Rules of Evidence. The committee issued the report after its April 2024 meeting, which featured testimony from several AI experts. While the committee did not propose any new or amended evidentiary rules addressing AI, it identified several areas it will continue to monitor.
During the committee’s April 19, 2024, meeting, eight experts presented on AI and machine learning, including computer scientists from the National Institute of Standards and Technology, leaders in AI regulation, and law professors.
The resulting report included four takeaways based on the experts’ presentations and the committee’s consideration of the testimony:
- Whether to amend the Federal Rules of Evidence to ensure that machine learning output is reliable when not accompanied by an expert witness. The committee agreed that such a rule was worth considering, recognizing that reliability is a greater concern than authenticity for AI output. The committee did not consider or propose any specific language, but noted that a potential solution could be a new rule applying Rule 702’s reliability standards to the output. The committee also recognized the challenges in drafting such a rule.
- Whether a special rule is necessary to authenticate items in the age of “deepfakes.”[1] The committee did not recommend creating a special rule for authenticating items, but noted that traditional means of authentication may need modification because deepfakes are difficult to detect. Proponents of a new rule argued that current Rule 901(a) sets a low standard for what a party must provide to prove authenticity. The committee declined to recommend a new or modified rule at this time because courts have years of experience dealing with forged evidence.
- Whether a new rule is necessary to address claims that an item is a deepfake. The committee noted that a party must make some initial showing that an item is a deepfake before a court inquires into the item’s authenticity. The committee again pointed to forgeries, noting that courts currently require some showing before inquiring into whether digital or social media evidence has been hacked. The committee declined one expert’s proposal for a new procedural rule addressing the burden of proof for moving forward with an inquiry into whether an item is a deepfake. The committee recognized that, in its present form, the proposed rule imposed too high an initial burden, but it remains open to considering a modified rule.
- Whether validation studies are necessary to introduce machine learning evidence so that courts and litigants would not need to analyze the underlying AI source code and algorithms. The committee found that additional thought is needed as to how courts would conduct and review such validation studies.
The committee concluded its report by noting it would continue to consider whether new rules or amendments are necessary to deal with AI and machine learning evidence. As the committee recognized, rulemaking is a multiyear process, which can be a lifetime in a rapidly developing area of technology like AI.
[1] A deepfake is a realistic but fake photograph, audio recording, or video created with AI.