With the second Trump Administration set to take power in January 2025, one can expect a pendulum swing in many aspects of technology policy. For example, while President Trump is expected to continue the Biden Administration’s efforts to limit China’s access to advanced semiconductor technology, the SEC is expected to become much more crypto-friendly. What is unclear, however, is how the new federal government will address AI regulation.
It’s a good bet that a Trump White House will take a lighter approach to oversight of the AI industry than the preceding Biden Administration and will do little to upset the current favorable environment for AI investment and start-up companies. Regardless of the exact route the new Trump Administration forges, the U.S. already dominates private-sector investment in AI, and the government will likely work to nurture this trend.
The Fate of Biden’s AI Executive Order and Related Agency Action
In October 2023, President Biden issued his “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” (Fact Sheet, here), designed to spur new AI safety and security standards, encourage the development of privacy-preserving technologies in conjunction with AI training, address certain instances of algorithmic discrimination, advance the responsible use of AI in healthcare, study the impacts of AI on the labor market, support AI research and a competitive environment in the industry, and issue guidance on the use of AI by federal agencies. One of the most notable aspects of the EO is that President Biden invoked the Defense Production Act to require that “developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.” As a result, in the past year certain companies have had to make initial disclosures to the U.S. Department of Commerce about the results of their red-team safety tests, their plans to train powerful models, and any large computing clusters they possess capable of such training. In September 2024, the Department of Commerce issued a proposed rule that would require this information to be reported on a quarterly basis.
It’s been widely reported that President Trump plans to rescind the Biden AI Executive Order. As a practical matter, many of the Executive Order’s dictates involved information gathering and reporting by federal agencies and have mostly been completed (see the recent White House release). Still, if President Trump rescinds the Executive Order, the reporting requirement to the Commerce Department would be scrapped, along with any related proposed rulemaking. Moreover, certain AI-related investigations and enforcement priorities at federal agencies, including FTC scrutiny of AI-related data privacy issues and potential antitrust issues in the AI industry, as well as algorithmic fairness initiatives by the EEOC, would likely be scaled back or dropped. It is also possible that certain agency AI frameworks or departmental AI safety programs might be eliminated or refocused elsewhere in the new administration. The federal government’s own AI procurement and usage policies will also likely change. Earlier this year, the White House’s Office of Management and Budget (OMB) issued M-24-10, which required agencies to implement concrete safeguards when using AI in ways that could impact Americans’ rights or safety, and M-24-18, a guidance document that established a government-wide policy to advance the responsible acquisition of AI by federal agencies (policies that often indirectly shape private-sector best practices, given the federal government’s IT purchasing power). One would expect these policies and guidance documents to be rewritten.
Light Touch Policy Approach Likely
Beyond the pledge to withdraw the Biden AI Executive Order, President Trump offered no specific AI policy details during the campaign. Even with Elon Musk in the President’s inner circle – someone who has previously come out in favor of certain AI safety regulation – it remains unclear how AI policy will unfold at the federal level. It is also unclear whether the shake-up in party control of Congress will affect the collective appetite for federal AI legislation.
One can envision federal agencies like the National Institute of Standards and Technology (NIST) continuing to issue influential guidance (e.g., AI Risk Management Framework Version 1.0), and the White House fostering additional public-private initiatives around certain bipartisan baseline AI issues, all without imposing heavy regulatory burdens.
Generally speaking, the new Trump Administration is expected to avoid “unnecessarily precautionary approaches to regulation,” a stance espoused in prior Trump-era AI policies. For example, in 2019, the first Trump Administration issued Executive Order 13859 on “Maintaining American Leadership in Artificial Intelligence,” which stated: “[T]he policy of the United States Government [is] to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI.”
A 2020 OMB guidance on AI regulation (issued years before generative AI entered the public consciousness) is probably a preview of this more hands-off approach: “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth. Where permitted by law, when deciding whether and how to regulate in an area that may affect AI applications, agencies should assess the effect of the potential regulation on AI innovation and growth.” The 2020 OMB memo describes a “risk-based approach” that determines which potential risks from AI present “the possibility of unacceptable harm, or harm that has expected costs greater than expected benefits.” Of course, this raises the question of who is in charge of weighing the risks and benefits and what metrics are used for that evaluation.
Thinking geopolitically, American policy on advanced technologies remains a vital national security interest. One expects the so-called “Digital Cold War” (the intensifying global competition, mainly between the U.S. and China, over AI advancement) to continue, and the federal government to advance policies reflecting the goal of maintaining America’s position as the leader in AI technology.
Other Sources of Influence
If the federal government plays a smaller role in AI regulation, individual states may continue to enact their own AI laws, particularly California, where many major AI companies are based. In the most recent legislative cycle, California enacted over 17 laws covering specific generative AI uses and outcomes, including bills on deepfakes, AI watermarking, child safety, election misinformation, performers’ AI rights and training data transparency (though Governor Newsom vetoed SB 1047, a comprehensive AI safety bill). With Colorado and Utah also having passed AI laws in the past year, the industry will be watching closely to see whether other states enter the regulatory space in the absence of a major federal response.
Outside of regulation, the AI environment will also be shaped by the outcome of the multiple pending lawsuits against AI developers over the use of copyrighted works to train generative AI models, as well as by AI developers’ own internal safety testing and any procedures revised in light of the enactment of the EU AI Act.