Federal Rules Update: 2024–2025 Changes Every Litigator Should Know
Five amendments to the Federal Rules of Evidence took effect on December 1, 2024. A proposed new rule on AI-generated evidence is in the public comment period right now. One state has already enacted its own AI evidence framework. And two circuit courts have issued rulings that reshape FRCP 37(e) spoliation practice.
This post covers all of it. Bookmark it, share it with your team, and come back when you need the cite.
I. FRE Amendments Effective December 1, 2024
The Supreme Court transmitted five amendments to Congress on April 2, 2024. All took effect December 1, 2024, under the Rules Enabling Act process. Here is what changed and why it matters.
New Rule 107: Illustrative Aids
This is an entirely new rule — the first addition to Article I of the FRE since the rules were enacted in 1975. Rule 107 draws a formal line between illustrative aids (presentations used to help the jury understand evidence or argument) and substantive evidence (exhibits offered to prove facts).
Think PowerPoint slides, timeline graphics, and computer animations used during opening or closing. Before Rule 107, courts handled these under an inconsistent patchwork of local rules and case law. Now there is a uniform framework. Key provisions:
- Illustrative aids are permitted unless their utility is substantially outweighed by unfair prejudice, confusion, or delay — mirroring the Rule 403 balancing test.
- Illustrative aids do not go to the jury room unless all parties consent or the court orders otherwise for good cause.
- The aid must be entered into the record whenever practicable, but not as an evidentiary exhibit.
Practice note: If you use courtroom presentations — particularly video recreations or computer simulations — you now need to clearly designate whether each item is an illustrative aid under Rule 107 or substantive evidence under the relevant admissibility rules. The distinction affects jury access, appellate review, and opposing counsel's objection strategy.
FRE 613: Prior Inconsistent Statements
Rule 613(b) now requires that a witness receive an opportunity to explain or deny a prior inconsistent statement before extrinsic evidence of that statement is introduced. This restores the common-law foundation requirement.
The prior version allowed the explanation at any point during trial. Under the amendment, the default is foundation first — but the court retains discretion to delay the opportunity or dispense with it entirely in appropriate circumstances. The amendment does not apply to opposing party statements under Rule 801(d)(2).
FRE 801(d)(2): Successor-in-Interest Statements
A new paragraph resolves a circuit split: when a party's claim or liability derives directly from a declarant, a statement that would be admissible against the declarant under Rule 801(d)(2) is now also admissible against the successor party. This is most significant in wrongful death and estate litigation, where some courts had excluded a decedent's admissions when offered against the estate.
FRE 804(b)(3): Declarations Against Interest
The amendment standardizes the corroborating circumstances requirement for statements against penal interest. Courts must now consider the totality of circumstances under which the statement was made and any independent evidence that supports or undermines it. The requirement applies regardless of whether the prosecution or the defense offers the statement — resolving another circuit split.
FRE 1006: Summaries as Substantive Evidence
Rule 1006 summaries of voluminous writings, recordings, or photographs are now explicitly substantive evidence. Courts may not instruct juries that a 1006 summary is “not evidence.” The underlying voluminous materials do not need to be separately admitted before a summary can be used, and a summary is not rendered inadmissible merely because the underlying documents have also been admitted.
The amendment also clarifies that summaries functioning purely as illustrative aids belong under new Rule 107, not Rule 1006.
II. Proposed Rules Under Consideration (2025)
Proposed FRE 707: AI-Generated Evidence
Status: Proposed — public comment period open through February 16, 2026. Not yet enacted.
The Advisory Committee on Evidence Rules approved proposed Rule 707 by an 8–1 vote on May 2, 2025. The Committee on Rules of Practice and Procedure released it for public comment in August 2025. If the rule is adopted without modification, the earliest effective date under the standard Rules Enabling Act timeline is December 1, 2027.
The rule addresses a specific gap: when machine-generated evidence is offered without an expert witness, but would be subject to Rule 702 (the expert testimony standard) if a witness presented the same conclusions, the court may admit it only if it satisfies Rule 702(a)–(d). Those requirements are: the evidence assists the trier of fact, is based on sufficient facts or data, is the product of reliable principles and methods, and reflects a reliable application of those principles and methods.
In plain terms, Rule 707 prevents parties from bypassing the reliability standards for expert testimony by offering AI-generated output directly as evidence without a sponsoring witness. The rule does not apply to output from basic scientific instruments.
The sole dissenter — a Department of Justice representative — argued that existing Rule 702 already covers this ground. The majority disagreed, reasoning that Rule 702 by its terms applies only to testimony by an expert witness, not to standalone machine output.
Proposed FRE 901(c): Burden-Shifting for Potentially AI-Fabricated Evidence
Status: Under consideration by the Advisory Committee. Not released for public comment.
The Advisory Committee has been discussing a proposed Rule 901(c) that would create a burden-shifting framework for authenticating evidence challenged as AI-fabricated. The structure: (1) the challenger must present evidence sufficient to support a finding that the evidence was fabricated by generative AI — a bare assertion is not enough; (2) if that threshold is met, the burden shifts to the proponent to demonstrate the evidence is more likely than not authentic — a higher standard than the usual prima facie authentication standard under Rule 901.
As of the Committee's November 2025 meeting, this proposal remains on the agenda but has not advanced to the public comment stage. The Committee appears to be prioritizing Rule 707 as the primary vehicle for AI evidence regulation, while monitoring real-world deepfake litigation before acting on 901(c).
Louisiana: First State to Enact an AI Evidence Framework
Status: Enacted. Effective August 1, 2025.
Louisiana Act No. 250 of the 2025 Regular Legislative Session (House Bill 178), signed by Governor Landry on June 11, 2025, makes Louisiana the first state with a comprehensive statutory framework for AI-generated evidence. The law passed the House 99–0.
The key provisions amend the Louisiana Code of Civil Procedure. Article 371(C) requires attorneys to exercise reasonable diligence to verify the authenticity of evidence before offering it to the court. Article 551 creates a procedure for challenging exhibits suspected of being AI-fabricated, including a disclosure standard for AI-generated evidence.
The law does not ban AI from legal practice. It establishes procedural guardrails: a duty to verify, a mechanism to challenge, and a disclosure requirement. Other states are watching.
III. Key 2024–2025 Case Law Developments
Hoffer v. Tellone, No. 22-1377 (2d Cir. Feb. 13, 2025)
The Second Circuit's first decision on the 2015 amendments to FRCP 37(e) — and it matters. The court held that sanctions under 37(e)(2) require a finding of intent to deprive another party of lost ESI, measured by a preponderance of the evidence. Negligence and even gross negligence are insufficient. The court expressly abrogated the Second Circuit's older, more permissive standard from Residential Funding Corp. v. DeGeorge Financial Corp., which had allowed sanctions based on a lesser culpable-state-of-mind standard.
Takeaway: The circuits are converging. Hoffer joins the Ninth Circuit (Gregory, below), the Fourth Circuit, and others in holding that 37(e)(2) requires actual intent to deprive — not mere negligence, recklessness, or bad faith in a generalized sense.
Gregory v. State of Montana, 118 F.4th 1069 (9th Cir. Sept. 27, 2024)
The Ninth Circuit held that FRCP 37(e) is the exclusive mechanism for sanctioning spoliation of ESI. Courts may not bypass 37(e)'s intent requirement by resorting to inherent authority to impose severe sanctions. The district court had found the State acted recklessly — but not with intent to deprive — in failing to preserve surveillance footage, then used inherent authority to instruct the jury that excessive force was established as a matter of law. The Ninth Circuit reversed, vacated the verdict, and remanded for a new trial.
Takeaway: If the ESI is gone and you want severe sanctions, you must prove intent to deprive under 37(e)(2). There is no inherent-authority workaround for ESI spoliation. This decision is a significant constraint on district courts that had been using inherent authority as a fallback when 37(e)'s intent standard was not met.
Baez v. Commonwealth, Record No. 230899 (Va. Dec. 19, 2024)
The Supreme Court of Virginia addressed body-worn camera admissibility in two significant holdings. First, BWC footage is not inherently testimonial hearsay — an officer's routine actions captured on camera are not intended as assertions, so the Confrontation Clause is not automatically implicated. Second, an officer other than the one wearing the camera can authenticate the footage, provided they can testify that it accurately represents what they observed and describe the general upload process.
Takeaway: Authentication of BWC footage does not require the specific officer who wore the camera to take the stand. But this decision also underscores the importance of documented chain of custody for BWC files — the authentication threshold is low, and integrity documentation fills the gap between what a witness can testify to and what a court needs to verify.
IV. What's Next: AI Evidence Authentication
The convergence is clear. Proposed Rule 707 addresses reliability of AI-generated evidence. Proposed Rule 901(c) addresses authentication when evidence is challenged as AI-fabricated. Louisiana has already enacted a state-level framework. And the Advisory Committee is actively debating both approaches.
The academic groundwork is substantial. Daniel J. Capra — Reporter for the Advisory Committee on Evidence Rules since 1996 and the official drafter of FRE amendments — published Deepfakes Reach the Advisory Committee on Evidence Rules, 92 Fordham L. Rev. 2491 (2024), documenting the Committee's initial deliberations. Hon. Paul W. Grimm (ret.), former U.S. District Judge for the District of Maryland and now Director of the Bolch Judicial Institute at Duke Law School, has co-authored proposals with Professor Maura Grossman for amending Rule 901 to address deepfakes and has advocated for judges to serve as gatekeepers who screen potential AI fabrications before juries see them.
Judge Grimm has identified what he calls the dual crisis of AI evidence: fabricated content can be passed off as real, and real content can be dismissed as fabricated. Both outcomes undermine the fact-finding process. The current authentication standard under Rule 901 — a prima facie showing of authenticity, roughly a preponderance standard — may be inadequate when the tools to fabricate convincing evidence are freely available and improving rapidly.
Whatever form the final rules take, the direction is unmistakable: courts will demand stronger proof that digital evidence is authentic and has not been altered. Integrity verification at the point of collection — not at the point of trial — is where that proof must originate.
V. Implications for Evidence Management
The 2024 amendments and proposed AI rules share a common thread: courts are increasingly focused on the reliability and integrity of evidence before it reaches the courtroom. Rule 107 distinguishes between evidence and aids. Rule 1006 elevates summaries to substantive evidence status. Rule 707 would require reliability demonstrations for machine-generated output. And the 37(e) case law demands documented preservation processes with provable intent.
For digital evidence specifically, integrity verification at the moment of upload is no longer just a best practice — it is becoming a prerequisite for admissibility. SHA-256 hash verification, timestamped access audit trails, and automated authentication certificates address the exact concerns driving these rule changes. They prove what a file contained when it was collected, who accessed it, and that nothing changed between collection and presentation.
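The point-of-collection hashing described above can be sketched in a few lines of Python. This is an illustrative example only: the function names and record format are hypothetical, and do not reflect Attested's actual API or any court-mandated schema.

```python
import hashlib
from datetime import datetime, timezone

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so large video or audio files never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_upload(path: str, uploader: str) -> dict:
    """Capture an integrity record at the moment of upload:
    the file's hash, who uploaded it, and a UTC timestamp."""
    return {
        "file": path,
        "sha256": sha256_of_file(path),
        "uploaded_by": uploader,
        "uploaded_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(path: str, record: dict) -> bool:
    """Confirm the file's current digest still matches the digest
    captured at collection -- i.e., nothing changed in between."""
    return sha256_of_file(path) == record["sha256"]
```

Because SHA-256 is collision-resistant, a matching digest at trial is strong evidence that the file presented is bit-for-bit identical to the file collected; any alteration, however small, produces a different digest and a failed `verify` check.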
Attested's evidence certificates incorporate SHA-256 integrity verification, automated access audit trail documentation, and viewer attribution — directly relevant to authentication challenges under both existing rules and proposed AI evidence frameworks.
Evidence Integrity Starts at Upload
Attested generates SHA-256 hash certificates with court-specific templates, automated chain of custody logs, and viewer identity watermarks at the moment of upload. Courts are increasingly requiring this documentation under both existing and proposed federal rules.