On January 23, 2025, President Trump issued a new Executive Order (EO) titled “Removing Barriers to American Leadership in Artificial Intelligence” (Trump EO). This EO replaces President Biden’s Executive Order 14110 of October 30, 2023, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (Biden EO), which was rescinded on January 20, 2025, by Executive Order 14148.
The Trump EO signals a significant shift away from the Biden administration’s emphasis on oversight, risk mitigation and equity toward a framework centered on deregulation and the promotion of AI innovation as a means of maintaining US global dominance.
Key Differences Between the Trump EO and Biden EO
The Trump EO explicitly frames AI development as a matter of national competitiveness and economic strength, prioritizing policies that remove perceived regulatory obstacles to innovation. It criticizes the influence of “engineered social agendas” in AI systems and seeks to ensure that AI technologies remain free from ideological bias. By contrast, the Biden EO focused on responsible AI development, placing significant emphasis on addressing risks such as bias, disinformation and national security vulnerabilities. The Biden EO sought to balance AI’s benefits with its potential harms by establishing safeguards, testing standards and ethical considerations in AI development and deployment.
Another significant shift in policy is the approach to regulation. The Trump EO mandates an immediate review and potential rescission of all policies, directives and regulations established under the Biden EO that could be seen as impediments to AI innovation. The Biden EO, however, introduced a structured oversight framework, including mandatory red-teaming for high-risk AI models, enhanced cybersecurity protocols and monitoring requirements for AI used in critical infrastructure. The Biden administration also directed federal agencies to collaborate on best practices for AI safety and reliability, efforts that the Trump EO effectively halts.
The two EOs also diverge in their treatment of workforce development and education. The Biden EO dedicated resources to attracting and training AI talent, expanding visa pathways for skilled workers and promoting public-private partnerships for AI research and development. The Trump EO, however, includes no specific workforce-related provisions. Instead, it appears to assume that reducing federal oversight will naturally spur innovation and talent growth in the private sector.
Priorities for national security are also shifting. The Biden EO mandated extensive interagency cooperation to assess the risks AI poses to critical national security systems, cyberinfrastructure and biosecurity. It required agencies such as the Department of Energy and the Department of Defense to conduct detailed evaluations of potential AI threats, including the misuse of AI for chemical and biological weapon development. The Trump EO aims to streamline AI governance and reduce federal oversight, prioritizing a more flexible regulatory environment and maintaining US AI leadership for national security purposes.
The most pronounced ideological difference between the two executive orders is in their treatment of equity and civil rights. The Biden EO explicitly sought to address discrimination and bias in AI applications, recognizing the potential for AI systems to perpetuate existing inequalities. It incorporated principles of equity and civil rights protection throughout its framework, requiring rigorous oversight of AI’s impact in areas such as hiring, healthcare and law enforcement. Not surprisingly, the Trump EO does not focus on these concerns, reflecting a broader philosophical departure from government intervention in AI ethics and fairness – perhaps treating existing laws that prohibit unlawful discrimination, such as Title VI and Title VII of the Civil Rights Act and the Americans with Disabilities Act, as sufficient.
The two orders also take fundamentally different approaches to global AI leadership. The Biden EO emphasized the importance of international cooperation, encouraging US engagement with allies and global organizations to establish common AI safety standards and ethical frameworks. The Trump EO, in contrast, appears to adopt a more unilateral stance, asserting US leadership in AI without outlining specific commitments to international collaboration.
Implications for the EU’s AI Act, Global AI and State Legal Frameworks
The Trump administration’s deregulatory approach comes at a time when other jurisdictions, particularly the EU, are moving toward stricter regulatory frameworks for AI. The EU’s Artificial Intelligence Act (EU AI Act), which was adopted by the EU Parliament in March 2024, imposes comprehensive rules on the development and use of AI technologies, with a strong emphasis on safety, transparency, accountability and ethics. The EU AI Act categorizes AI systems by risk level and subjects high-risk systems to stringent requirements, including mandatory third-party impact assessments, transparency standards and oversight mechanisms.
The Trump EO’s emphasis on reducing regulatory burdens stands in stark contrast to the EU’s approach, which reflects a precautionary principle that prioritizes societal safeguards over rapid innovation. This divergence could create friction between the US and EU regulatory environments, especially for multinational companies that must navigate both systems. Although the EU AI Act has been criticized as impeding innovation, the lack of explicit ethical safeguards and risk mitigation measures in the Trump EO could also weaken the ability of US companies to compete in European markets, where compliance with the EU AI Act’s rigorous standards is a legal prerequisite for EU market access.
Globally, jurisdictions such as Canada, Japan, the UK and Australia are advancing their own AI policies, many of which align more closely with the EU’s focus on accountability and ethical considerations than with the US’s pro-innovation stance under the Trump administration. For example, Canada’s Artificial Intelligence and Data Act emphasizes transparency and responsible development, while Japan’s AI guidelines promote trustworthy AI principles through multistakeholder engagement. While the UK has taken a lighter-touch regulatory approach than the EU, it places a strong emphasis on safety through its AI Safety Institute.
The Trump administration’s decision to rescind the Biden EO and prioritize a “clean slate” for AI policy may also complicate efforts to establish global standards for AI governance. While the EU, the G7 and other multilateral organizations are working to align on key principles such as transparency, fairness and safety, the US’s unilateral focus on deregulation could limit its influence in shaping these global norms. Additionally, the Trump administration’s pivot toward deregulation risks creating a perception that the US prioritizes short-term innovation gains over long-term ethical considerations, potentially alienating allies and partners.
A final consideration is the potential for the Trump EO to widen the gap between federal and state AI regulatory regimes, inasmuch as it presages deregulation of AI at the federal level. Indeed, while the EO signals a federal shift toward prioritizing innovation by reducing regulatory constraints, the precise contours of the new administration’s approach to regulatory enforcement – including on issues like data privacy, competition and consumer protection – will become clearer as newly appointed federal agency leaders begin implementing their agendas. At the same time, states such as Colorado, California and Texas have already enacted AI laws with varying scope and degrees of oversight. As with state consumer privacy laws, increased state-level activity in AI would likely lead to further regulatory fragmentation, with states implementing their own rules to address concerns related to high-risk AI applications, transparency and sector-specific oversight.
Thus, in the absence of clear federal guidelines, businesses will face a growing patchwork of state AI regulations that complicates compliance across multiple jurisdictions. Moreover, if Congress enacts an AI law that prioritizes innovation over risk mitigation, stricter state regulations could face federal preemption. Until then, organizations must closely monitor both federal and state developments to navigate this evolving and increasingly fragmented AI regulatory landscape.
Ultimately, a key test for the Trump administration’s approach to AI is whether it preserves and enhances US leadership in AI or allows China to build a more powerful AI platform. The US approach will undoubtedly drive investment and innovation by US AI companies. But China may pursue collaborative engagement with international AI governance initiatives, which would position it strongly as an international leader in AI. Alternatively, is DeepSeek a flash in the pan, a stimulus for US competition or a portent of the future?
Conclusion
Overall, the Trump EO reflects a fundamental shift in US AI policy, prioritizing deregulation and free-market innovation while reducing oversight and ethical safeguards. However, this approach could create challenges for US companies operating in jurisdictions with stricter AI regulations, such as the EU, the UK, Canada and Japan – as well as in those US states that have already enacted their own AI regulatory regimes. The divergence between the US federal government’s pro-innovation strategy and the precautionary regulatory model pursued by the EU and these US states underscores the need for companies operating across these jurisdictions to adopt flexible compliance strategies that account for varying regulatory standards.
Pablo Carrillo also contributed to this article.