On Oct. 30, 2023, President Biden issued an executive order on the safe, secure, and trustworthy development and use of artificial intelligence (AI). The executive order reflects the U.S. government’s commitment to fostering the development and responsible use of AI. It will affect both the public and private sectors, as its directives encompass areas such as NIST security frameworks, federal regulations, and contracting requirements.
There are eight guiding principles and priorities in the executive order: (1) new standards for AI safety and security; (2) protecting Americans’ privacy; (3) advancing equity and civil rights; (4) standing up for consumers, patients, and students; (5) supporting workers; (6) promoting innovation and competition; (7) advancing American leadership abroad; and (8) ensuring responsible and effective government use of AI. The White House has published a fact sheet summarizing these eight principles and priorities.
The executive order contains timelines and directives for multiple federal agencies. These directives instruct the agencies to establish standards, provide guidance, and leverage their existing authorities to oversee the application of AI.
The executive order addresses a broad spectrum of issues, including:
AI Safety and Security: The executive order directs federal agencies to take steps to ensure that AI systems are safe and secure and are not used in ways that could harm the public. This includes developing and implementing risk management frameworks for AI systems and conducting regular security assessments. The executive order also imposes new regulatory requirements on businesses developing, or attempting to develop, foundation models that could impact national security, including national economic security and public health (so-called “dual-use foundation models,” reflecting their applicability in both civilian and military contexts). These requirements stem from the Defense Production Act, which authorizes the president to influence industry for national defense purposes. The secretary of commerce, in collaboration with relevant officials, will define the technical specifications for the models and compute resources subject to these regulations. Initially, the thresholds are set relatively high and are likely to affect only large cloud computing providers or AI-model developers. However, because these requirements apply during the development and acquisition phases, businesses should carefully evaluate whether their activities could fall under these regulations.
Privacy Protection: The executive order directs federal agencies to take steps to protect individuals’ privacy when developing and using AI systems. This includes ensuring that AI systems are designed to collect, use, and share data in a transparent and accountable manner, and supporting the further development of privacy-enhancing technologies (PETs).
Equity and Civil Rights: The executive order mandates that federal agencies implement measures to ensure the fair and equitable development and use of AI systems, free from discrimination against any individual or group. This entails developing and implementing AI equity assessments and actively working to mitigate bias in AI systems.
Consumer and Worker Protection: The executive order instructs federal agencies to safeguard consumers and workers from potential AI-related harm. This involves developing and implementing guidelines for the responsible use of AI in the workplace and ensuring that consumers are adequately informed about the risks and benefits of AI products and services.
Innovation and Competition: The executive order charges federal agencies with fostering innovation and competition within the AI sector. This entails allocating resources for AI research and development, while also supporting the advancement of novel and emerging AI technologies.
American Leadership: The executive order mandates that federal agencies undertake initiatives to maintain the United States' global leadership position in AI development and utilization. This entails collaborating with international partners to establish shared AI standards and norms while advocating for responsible AI practices worldwide.
Joseph "Joe" Damon, Leslie Green, Jackson Parese, and Marc Jenkins contributed to this article.