On 24 October 2024, the Biden Administration issued a National Security Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence (NSM), underscoring the United States’ commitment to leading globally in the responsible development, application, and regulation of artificial intelligence (AI). The NSM was a key deliverable of the Administration’s 30 October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
Key Objectives in AI Advancement
The NSM identifies three objectives: (1) fostering safe and secure development of AI technologies, (2) advancing national security interests through the strategic deployment of AI, and (3) promoting a global AI governance framework rooted in transparency, human rights, and trustworthiness.
- Fostering Safe and Secure AI Development: The Administration emphasizes the need to balance AI innovation with robust safety measures to prevent misuse or unintended consequences. The NSM stresses that AI systems, particularly those that affect national security, must be resilient to cybersecurity threats, operate transparently, and avoid spreading misinformation.
- Advancing National Security Objectives: To safeguard the United States, the Administration aims to leverage AI’s capabilities to support intelligence, defense, and cybersecurity operations. This includes integrating AI into threat detection, strategic decision-making, and resource allocation, where AI-driven insights could enhance national preparedness and response. The NSM emphasizes that agencies must account for ethical considerations when developing AI systems that support national security and must prioritize human oversight and accountability.
- Promoting International AI Governance: Recognizing AI’s global implications, the NSM advocates for an international governance framework that supports AI’s ethical development and fosters alliances with like-minded nations. The United States aims to work alongside allies to set global standards for AI development that support privacy, human rights, and safety. This effort will involve building partnerships to counter authoritarian AI use, ensuring AI technologies are not exploited in ways that harm democracy or exacerbate surveillance risks.
CFIUS and Its Role in AI Investment Security
Of particular note to non-US companies, the NSM directs the Committee on Foreign Investment in the United States (CFIUS) to scrutinize foreign investments, acquisitions, or transactions that involve sensitive AI intellectual property or other AI-driven technologies with national security relevance. This heightened scrutiny aligns with the broader goal of safeguarding critical AI advancements from external threats while supporting the integrity of US technological leadership.
Implications for Stakeholders
Understanding the scope of the NSM’s directives will enable companies to better shape their approaches to AI in areas that touch on national security. Companies developing AI for national security, healthcare, finance, and other sensitive fields will need to align their efforts with these regulatory priorities, ensuring their innovations meet emerging safety, security, and ethical standards. Additionally, stakeholders engaged in international AI collaborations must be mindful of evolving CFIUS guidance, particularly regarding the handling of intellectual property and sensitive AI assets in cross-border transactions.