On October 24, 2024, the White House released a memorandum (the “Memo”) implementing Executive Order 14110 (the “EO”), “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued in October 2023. The EO outlined a comprehensive, all-of-government approach to developing an AI governance framework. The Memo provides further directives related to AI governance, particularly in the national security context.
EO 14110 directed agency action across a range of AI-related areas, including competition and innovation, safety and security, consumer protection, workers’ issues, privacy, equity and civil rights, U.S. leadership abroad, and responsible government use of AI. The Memo builds on these themes and outlines three main objectives: [1] advancing U.S. leadership in AI; [2] harnessing AI to fulfill national security objectives; and [3] fostering the safety, security, and trustworthiness of AI. The Memo directs a number of agencies, including DOD, DHS, DOE, DOJ, CFIUS, and NIST, among others, to achieve these objectives. Highlights include the following:
- U.S. Leadership in AI: Ensure the U.S. remains the top location for global AI talent and computing facilities while protecting U.S. AI from foreign intelligence threats.
  - Promote progress, innovation, and competition: The Memo directs agencies including DOD, DHS, NSF, and DOE to take actions such as streamlining the visa process and fostering investment in AI infrastructure.
  - Protect industry, civil society, and academic AI intellectual property and infrastructure from foreign intelligence threats: The Memo directs the NSC and agencies including ODNI, DOD, DOJ, CFIUS, and others to take actions such as reviewing the national security framework doctrine, identifying vulnerabilities in the AI supply chain, and considering AI implications for covered transactions.
  - Manage risks: Continue to develop international AI governance with a range of partners. The Memo directs the Department of Commerce, acting through NIST’s AI Safety Institute, to take the lead in facilitating and providing guidance on AI testing programs and methods.
- AI and National Security: Responsibly harness AI’s power to meet national security objectives.
  - Enable effective and responsible use of AI: The Memo directs various agencies to develop and attract AI talent and to encourage adoption of AI. It also establishes a working group to address AI procurement issues, a directive with the potential to significantly impact the trajectory of AI across industries.
  - Strengthen AI governance and risk management: The Memo directs relevant agencies to consider specific risks directly related to each agency’s use of AI and to establish an AI governance and risk management framework for the national security community.
- Safe, Secure, Trustworthy AI: Continue to develop a “stable and responsible” framework for global AI governance. The Memo directs the State Department, in coordination with DOD, the Department of Commerce, DHS, the U.S. Mission to the UN, and USAID, to develop a strategy for engaging with global actors on AI governance in accordance with existing frameworks.
The Memo also details the appropriate use of AI in government, with a range of directives related to assessment and reporting across agencies, as well as a classified annex covering other sensitive national security issues such as countering adversary uses of AI. The Memo notes that the recent “paradigm shift” in AI toward large language models and “computationally intensive systems” has occurred primarily outside of government to date, but that it is critically important for the government to assume a key role in AI governance and innovation.
With the first of the Memo’s many directives due within 30 days, and others following at various intervals, the U.S. government’s role in AI governance and risk management will continue to take shape over the coming year.