Highlights
- Proposed Federal Rule of Evidence 707 would subject “machine-generated evidence” to the same admissibility standard as expert testimony.
- To be admissible, the proponent of the evidence must show that the AI output is based on sufficient facts or data, is the product of reliable principles and methods, and reflects a reliable application of those principles and methods to the facts.
- Public comment on proposed Rule 707 is open until February 16, 2026.
In November 2024, amendments to the Federal Rules of Evidence were proposed to address the use of evidence generated by artificial intelligence (AI). A prior alert discussed these proposed amendments: (1) an amendment to Rule 901 on authenticating evidence, and (2) the creation of a new Rule 707 on the admissibility standard for such evidence.
On August 16, 2025, the Committee on Rules of Practice and Procedure of the Judicial Conference of the United States issued draft amendments to 10 rules spanning appellate procedure, bankruptcy, civil procedure, criminal procedure, and evidence. Among them is a revised version of proposed Rule 707, now open for public comment through February 16, 2026.
The proposed text states:
Machine-Generated Evidence: When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d). This rule does not apply to the output of simple scientific instruments.
Under the proposed rule, any AI output offered into evidence, even without the testimony of an expert witness, must still meet the standard for expert testimony; that is, it must:
- Assist the trier of fact
- Be based on sufficient facts or data
- Be the product of reliable principles and methods
- Reflect a reliable application of the principles and methods to the facts
Applying the Rule 702 standards to AI-generated evidence is consistent with the intent behind the expert testimony standards. Evidence should not emerge from a black box; it should be subject to analysis, cross-examination, and scrutiny. Just as an opponent can question an expert witness’s application of a methodology to the facts of the case, Rule 707 would allow the opponent of AI-generated evidence to probe how that evidence was generated. Discovery into how AI-generated evidence was created, and into what prompts and other information may have been provided to an AI tool, is likely to produce battles over discoverability, the applicability of privileges such as the work product privilege, and how far litigants can peer into their opponents’ use of AI.
The Committee Note to the proposed rule highlights this point, stating:
When a machine draws inferences and makes predictions, there are concerns about the reliability of that process, akin to the reliability concerns about expert witnesses.
These concerns include misuse of an AI model, inherent bias, insufficient factual support for an output, and a lack of transparency into how the output was generated.
According to the Committee, the purpose of Rule 707 is to prevent the proponent of machine-generated evidence from evading “the reliability requirements of Rule 702 by offering machine output directly, where the output would be subject to Rule 702 if rendered as an opinion by a human expert.” This comment suggests that Rule 707 is intended to extend the Rule 702 expert witness standard to AI-generated outputs. A court’s analysis under the proposed rule would focus on the sufficiency of the AI inputs (such as prompts), the internal processes of the AI platform, and the validity of the resulting outputs.
Proposed Rule 707 exempts simple scientific instruments, such as thermometers, scales, and other commonly used devices, from the rule’s reach.
Takeaway
As the use of AI expands, the admissibility of its outputs in court will become a more central focus. New discovery battles are likely to arise over the use of AI, including over how litigants have used AI to generate evidence they seek to introduce in court. Courts will be asked to weigh the reliability of AI outputs offered into evidence at trial. Proposed Rule 707 presents one approach to addressing this issue.