Highlights
- The U.S.’s cautious approach to AI policy and regulation is signaled by its decision not to join an international agreement and its withdrawal of a previous regulatory framework
- A new request for information solicits broad input from industry, academia, government, and other stakeholders
The U.S. has taken significant steps to reshape its artificial intelligence (AI) policy landscape. On Jan. 20, 2025, the administration issued an order revoking Executive Order 14110, originally signed on Oct. 30, 2023. This decision marks a substantial shift in AI governance and regulatory approaches. On Feb. 6, 2025, the government issued a request for information (RFI) from a wide variety of industries and stakeholders to solicit input on the development of a comprehensive AI Action Plan that will guide future AI policy.
As part of this initiative, the government is actively seeking input from academia, industry groups, private-sector organizations, and state, local, and tribal governments. These stakeholders are encouraged to share their insights on priority actions and policy directions that should be considered for the AI Action Plan. Interested parties must submit their responses by 11:59 p.m. ET on March 15, 2025.
Executive Order 14110 was designed to establish a broad regulatory framework for AI, emphasizing transparency, accountability, and risk mitigation. The revoked order required organizations engaged in AI development to adhere to specific reporting obligations and public disclosure mandates. The order affected a wide range of stakeholders, including technology companies, AI developers, and data center operators, all of whom had to align with the prescribed compliance measures. With the Jan. 23 Executive Order 14179, organizations must now reassess their compliance obligations and prepare for potential new frameworks that could take the place of the previous Executive Order 14110.
However, the RFI presents an opportunity to participate in the formation of new AI policies and regulations. The new order and the RFI seek input on AI policies and regulations aimed at maintaining U.S. prominence in AI development. Consequently, potentially burdensome requirements seem unlikely to emerge in the near term.
On the international front, the U.S. administration’s decision not to sign the AI Safety Declaration at the recent AI Action Summit in Paris avoids potential international constraints on AI development in the U.S. This decision, together with the issuance of the RFI, signals a cautious approach to developing an AI Action Plan that will drive policy through stakeholder engagement and regulatory adjustments.
The AI Action Plan is intended to establish strategic priorities and regulatory guidance for AI development and deployment. It aims to ensure AI safety, foster innovation, and address key security and privacy concerns. The scope of the plan is expected to be broad, covering topics such as AI hardware and chips, data centers, and energy efficiency.
Additional considerations will include AI model development, open-source collaboration, and application governance, as well as explainability, cybersecurity, and AI model assurance. Data privacy and security throughout the AI lifecycle will also be central to discussions, alongside considerations related to AI-related risks, regulatory governance, and national security. Other focal areas include research and development, workforce education, intellectual property protection, and competition policies.
Takeaways
Given these policy indications, organizations should take proactive steps to adapt to, and potentially contribute to, the evolving AI regulatory landscape. It is essential for businesses to remain aware of policy developments and to engage in opportunities to help shape forthcoming AI policies. Furthermore, monitoring international AI governance trends will be crucial, as these developments may affect AI operations within the U.S.