On October 30, 2023, U.S. President Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. It marks the Biden Administration’s most comprehensive action on artificial intelligence policy to date, building on the Administration’s Blueprint for an AI Bill of Rights (issued in October 2022) and the voluntary commitments to manage AI risks that it secured from 15 leading AI companies (announced in July and September 2023).
The new EO directs actions across eight areas, described by the White House as follows:
- New standards for AI safety and security;
- Protecting Americans’ privacy;
- Advancing equity and civil rights;
- Standing up for consumers, patients, and students;
- Supporting workers;
- Promoting innovation and competition;
- Advancing American leadership abroad; and
- Ensuring responsible and effective government use of AI.
Notably, the Order requires private companies to share with the federal government the results of “red-team” safety tests for foundation models that pose certain serious risks, directs the development of new standards to guide government agencies’ acquisition and use of AI, launches a pilot of the National AI Research Resource to foster U.S. leadership in AI innovation, and pledges cooperation with international partners on frameworks for responsible AI development and deployment.
The Order’s provisions are summarized in further detail below.
Standards for AI Safety and Security
- Red-teaming requirements. The Administration will leverage the Defense Production Act to require companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety to notify the federal government when training the model. These developers must also share the results of all “red-team” safety tests with the government.
- New standards. The National Institute of Standards and Technology will set new standards for red-team testing. The Departments of Homeland Security and Energy will draw on these standards to address risks to critical infrastructure, as well as cybersecurity, chemical, biological, radiological, and nuclear risks. In addition, federal agencies will develop new standards for biological synthesis screening, which will be established as a condition of federal funding for life sciences research. Finally, the Department of Commerce will develop new standards for content authentication and watermarking to label AI-generated content, and federal agencies will use these standards to mark the content that they generate.
- Addressing software vulnerabilities. The Administration will create a new program to develop AI tools to investigate and address vulnerabilities in critical software.
- AI use by the military and intelligence community. The National Security Council and White House Chief of Staff will develop a National Security Memorandum to guide safe and ethical use of AI by the military and intelligence community.
Protecting Americans’ Privacy
- Call for federal privacy legislation. The Administration calls on Congress to pass privacy legislation to “protect all Americans, especially kids.”
- Privacy-enhancing technologies (PETs). The White House will prioritize development of privacy-preserving and privacy-enhancing technologies (supporting their research and development with new federal programs) and will develop guidelines for federal agencies’ use of these technologies.
- Guidance on use of personal information by federal agencies. The Administration will reevaluate how federal agencies use commercially obtainable personal information and strengthen privacy guidance for federal agencies.
Advancing Equity and Civil Rights
- New guidance and training to guard against discrimination. The Administration will issue guidance to landlords, federal benefits programs, and federal contractors to prevent AI algorithms from contributing to discrimination and will use training and technical assistance to advance best practices for investigating and prosecuting AI-related civil rights violations.
- Criminal justice system fairness. The Administration will develop best practices to guide the use of AI throughout the criminal justice system, including the use of AI in sentencing, policing, and forensic analysis.
Standing Up for Consumers, Patients, and Students
- Healthcare safety program. The Department of Health and Human Services will establish a safety program to address risks associated with the use of AI in healthcare.
- AI in education. The Administration will also support the expansion of AI-enabled tools in education.
Supporting Workers
- Labor-market research and best practices. The Administration will support new research on AI’s potential impacts on the labor market and will develop principles and best practices to address potential job displacement, labor standards, workplace health and safety, and workplace data collection.
Promoting Innovation and Competition
- National AI Research Resource. The Administration will support U.S. leadership in AI through a pilot of the National AI Research Resource, which will broaden access to AI resources and data for researchers and students.
- Grants, technical assistance, and talent. The Administration will also expand grants and technical assistance for AI innovation and will encourage AI experts from abroad to work and study in the United States.
Advancing American Leadership Abroad
- International frameworks. The State and Commerce Departments will work with international partners to establish frameworks for realizing AI’s benefits and mitigating its risks.
- Shared standards. The United States is committed to working with other countries to develop secure, trustworthy, and interoperable AI standards.
Ensuring Responsible and Effective Government Use of AI
- Agency guidance and AI talent. The White House will issue new guidance for agencies’ use of AI, including its procurement and deployment, and will accelerate the hiring of skilled AI personnel through an “AI talent surge.”