On October 30, 2023, the Biden Administration announced its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“AI EO”). Building on the White House’s Blueprint for an AI Bill of Rights, the AI EO created a framework that allows for innovation in artificial intelligence (“AI”) while setting standards and protections for the use and development of AI. You can read more about the AI EO, and other AI-related developments, here.
Notably, the AI EO directed executive departments and agencies to adhere to eight guiding principles and priorities when undertaking actions set forth in the AI EO, while also considering the views of other agencies, industry, academics, civil society, international allies, and other relevant organizations. To date, various agencies have acted pursuant to 90-day, 180-day, and 270-day deadlines, which you can read more about here, here, and here, respectively. Various agencies are also required to take action by the 365-day deadline at the end of October 2024. The eight principles and priorities are:
- Safety and Security: requiring evaluations of AI systems to understand and mitigate risks before the AI system is put to use, as well as development of labeling and content provenance mechanisms to indicate when content is generated using AI.
- Responsible Innovation, Competition, and Collaboration: requiring investment in education, training, development, research, and capacity for AI to continue the development of AI systems to solve societal problems, while also protecting the intellectual property of inventors and creators.
- Support of American Workers: requiring considerations to ensure that the American workforce is able to participate in newly created AI jobs and industries, including protections of workers’ rights and safety, education and training, and allowance of collective bargaining.
- Advancement of Equity and Civil Rights: requiring that AI complies with all federal laws to prevent unlawful discrimination, bias, and abuse.
- Protection of AI Users: requiring continuing enforcement of consumer protection laws and enactment of safeguards against fraud, bias, discrimination, privacy infringement, and other harms, especially in healthcare, financial services, education, housing, law, and transportation.
- Protection of Consumer Privacy and Civil Liberties: requiring that the collection, use, and retention of personal data are lawful, secure, and confidential, including by implementing privacy-enhancing technologies.
- Risk Management and Governance: requiring the use of the federal government’s internal capacity to regulate, govern, and support the responsible use of AI, including attracting AI professionals to assist the federal government with harnessing and governing AI and developing a framework to manage risks associated with AI.
- Technological Innovation and Progress: requiring engagement with international allies in developing a framework and promoting common approaches to shared challenges related to AI as the technology continues to advance.
Agencies subject to the AI EO are those described in 44 U.S.C. § 3502(1), which include any executive department, military department, government corporation, and any other establishment in the executive branch of the government. “Agency” under the AI EO does not include independent regulatory agencies, such as the Federal Reserve System, the Commodity Futures Trading Commission, the Federal Communications Commission (“FCC”), the Federal Trade Commission (“FTC”), the Securities and Exchange Commission (“SEC”), and the other independent regulatory agencies listed in 44 U.S.C. § 3502(5).
Beyond the above principles and priorities, the AI EO also requires specific agency action within certain timelines. The following charts set out the agency requirements under the AI EO and what agencies have accomplished to date in response. In addition to the below, the AI EO requires agencies to increase AI talent in government by expanding recruiting efforts and by streamlining and facilitating the processing of visa petitions and applications for noncitizens who seek to work on, study, or research AI or other critical and emerging technologies. The AI EO also requires agencies to promote competition and innovation in AI, including in the semiconductor industry, and to advance and promote global AI standards. Finally, the AI EO requires the Attorney General to address unlawful discrimination and other civil rights and criminal justice harms that may be exacerbated by AI, and requires the Office of Management and Budget (“OMB”) and other agencies to identify certain privacy risks and develop standards to protect consumer privacy.
Please contact the Squire Patton Boggs authors if you have any questions about the AI EO.
⸻
WITHIN 60 DAYS OF THE DATE OF THE AI EO – DECEMBER 29, 2023
Agencies | Requirements | Action to Date |
--- | --- | --- |
Director of the OMB | Convene and chair an interagency council to coordinate the development and use of AI in agency programs and operations. Develop a method for agencies to track and assess their ability to adopt AI into their programs and operations, manage its risks, and comply with federal policy on AI. | The OMB has convened an interagency council to coordinate agencies’ use of AI. |
⸻
WITHIN 90 DAYS OF THE DATE OF THE AI EO – JANUARY 28, 2024
Agencies | Requirements | Action to Date |
--- | --- | --- |
Secretary of Commerce | Require companies developing potential dual-use AI systems to provide the federal government with information, reports, or records regarding the training, development, and production of AI systems, including results of red-team safety testing. Propose regulations that require U.S. Infrastructure as a Service (“IaaS”) providers to submit a report when a foreign person transacts with that IaaS provider to train a large AI model with potential capabilities that can be used in a malicious cyber-enabled activity. | The Department of Commerce has compelled AI developers to report information about AI systems, including safety test results and information about large computing clusters able to train AI systems. The Department of Commerce also proposed a draft rule on January 29, 2024, to require cloud providers to alert the federal government when foreign clients train AI models using computing power from U.S. cloud companies. |
Agencies with authority over critical infrastructure and heads of Sector Risk Management Agencies, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security | Evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors. | Nine agencies submitted risk assessments to the Department of Homeland Security analyzing the risks of the use of AI in critical infrastructure sectors. The nine agencies include the Department of Defense, the Department of Transportation, the Department of the Treasury, and the Department of Health and Human Services. |
Director of the National Science Foundation (“NSF”), in coordination with heads of agencies | Launch a pilot program implementing the National AI Research Resource. | The NSF has launched a pilot of the National AI Research Resource, which aims to provide a national infrastructure for delivering computing power, data, software, access to AI models, and other AI-related training resources to students and researchers. |
Secretary of Health and Human Services (“HHS”), in consultation with the Secretary of Defense and the Secretary of Veterans Affairs | Establish an HHS AI Task Force that shall, within 365 days of its creation, develop a strategic plan that includes policies and frameworks on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector. | HHS has established an AI Task Force to develop policies regarding the use and innovation of AI in healthcare. The AI Task Force published guiding principles for addressing racial bias in healthcare algorithms on December 15, 2023. |
Secretary of Commerce, acting through the Director of the National Institute of Standards and Technology (“NIST”) and in coordination with the Director of OMB and the Director of the Office of Science and Technology Policy (“OSTP”) | Develop guidelines, tools, and practices to support implementation of minimum risk-management practices. | NIST has released several documents for public comment, including the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile.” These documents address managing generative AI risks, securely developing generative AI systems and dual-use foundation models, expanding international standards development in the field of AI, and reducing risks posed by AI-generated content. |
Administrator of General Services, in coordination and consultation with the Director of OMB, the Federal Secure Cloud Advisory Committee, and other relevant agencies | Develop and issue a framework for prioritizing critical and emerging technologies offerings in the Federal Risk and Authorization Management Program authorization process, starting with generative AI. | The General Services Administration released a draft framework, followed by a final framework on June 27, 2024, for prioritizing generative AI technologies in security authorizations for products and services procured by the federal government. |
⸻
WITHIN 180 DAYS OF THE DATE OF THE AI EO – APRIL 27, 2024
Agencies | Requirements | Action to Date |
--- | --- | --- |
Secretary of Commerce | Propose regulations that require U.S. IaaS providers to ensure that foreign resellers of U.S. IaaS products verify the identity of any foreign person who obtains an IaaS account from the foreign reseller. | The Department of Commerce proposed a draft rule on January 29, 2024, to require cloud providers to alert the federal government when foreign clients train AI models using computing power from U.S. cloud companies. |
Secretary of Commerce, acting through the Director of NIST, in coordination with the Director of OSTP, and in consultation with the Secretary of State, the Secretary of HHS, and heads of other agencies | Initiate an effort to engage with industry and relevant stakeholders to develop and refine specifications, best practices, technical implementation guides, and conformity assessment practices for possible use by synthetic nucleic acid sequence providers. | The Department of Commerce has launched an effort to engage the nucleic acid synthesis industry on necessary technical implementation details to facilitate the adoption of a screening framework. |
Secretary of Homeland Security, in coordination with the Secretary of Commerce, Sector Risk Management Agencies, and other regulators | Incorporate the AI Risk Management Framework and other appropriate security guidance into relevant safety and security guidelines for use by critical infrastructure owners and operators. Establish an AI Safety and Security Board as an advisory committee, which must include AI experts from the private sector, academia, and government. | The Department of Homeland Security has incorporated the AI Risk Management Framework, and other related guidance, into security guidelines covering critical infrastructure. The Department of Homeland Security has also launched the AI Safety and Security Board to advise the Department of Homeland Security, the critical infrastructure community, other private sector stakeholders, and the public on the safe and secure development and deployment of AI technology. |
Secretary of Defense and the Secretary of Homeland Security | Each to develop plans for, conduct, and complete an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities to aid in the discovery and remediation of vulnerabilities in critical United States government software, systems, and networks. | The Department of Defense has piloted new AI tools to identify vulnerabilities in vital government software systems used for national security and military purposes. The Department of Defense also launched additional tools to identify and close vulnerabilities in other critical government software systems commonly relied on by Americans. |
Secretary of Homeland Security, in consultation with the Secretary of Energy and the Director of OSTP | Evaluate the potential for AI to be misused to enable the development or production of chemical, biological, radiological, or nuclear (“CBRN”) threats. Consult with experts in AI and CBRN issues from the Department of Energy, private AI labs, academia, and third-party model evaluators to evaluate AI model capabilities to present CBRN threats, as well as options for minimizing the risks of AI model misuse to generate or exacerbate those threats. Submit a report to the President that describes the progress of the above efforts, including an assessment of the types of AI models that may present CBRN risks to the U.S. and recommendations for regulating or overseeing the training, deployment, publication, or use of these models. | The Department of Homeland Security has evaluated and submitted a report to the President that discusses the potential for AI to cause or exacerbate CBRN threats, and how such threats can be countered. |
Secretary of Homeland Security, in consultation with the heads of other relevant agencies | Develop a framework, and submit reports pursuant to the framework, to conduct structured evaluation and stress testing of nucleic acid synthesis procurement screening. | The Secretary of Homeland Security, using studies by the Department of Homeland Security, Department of Energy, and the OSTP, has established a framework for nucleic acid synthesis screening to help prevent the misuse of AI for engineering dangerous biological materials. |
All agencies that fund life-sciences research | Establish that, as a requirement of funding, synthetic nucleic acid procurement is conducted through providers or manufacturers that adhere to the framework required immediately above, such as through an attestation from the provider or manufacturer. | The Department of Commerce has worked to engage the private sector to develop technical guidance to facilitate the implementation of the Secretary of Homeland Security’s framework for nucleic acid synthesis. |
Secretary of Homeland Security, acting through the Director of the National Intellectual Property Rights Coordination Center and in consultation with the Attorney General | Develop a training, analysis, and evaluation program to mitigate AI-related IP risks. | The Department of Homeland Security has established a training program to help industry and domestic law enforcement better understand and respond to AI-related intellectual property risks. |
Secretary of Energy, in consultation with the Chair of the Council on Environmental Quality, the Assistant to the President and National Climate Advisor, and heads of other relevant agencies | Issue a public report describing the potential for AI to improve planning, permitting, investment, and operations for electric grid infrastructure and to enable the provision of clean, affordable, reliable, resilient, and secure electric power. Develop tools that facilitate building foundation models useful for basic and applied science, including models that streamline permitting and environmental reviews while improving environmental and social outcomes. Collaborate with private sector organizations and members of academia to support the development of AI tools to mitigate climate change risks. Expand partnerships with industry, academia, other agencies, and international allies and partners to utilize computing capabilities and AI testbeds to build foundation models that support new applications in science and energy, and for national security. Establish an office to coordinate the development of AI and other critical and emerging technologies across Department of Energy programs and the 17 national laboratories. | The Department of Energy has published a report outlining opportunities for AI to advance the clean energy economy and modernize the electric grid. The Department of Energy has launched new AI tools to streamline permitting processes and improve siting for clean energy infrastructure, along with other tools. The Department of Energy has launched partnerships to address energy challenges and advance clean energy and has begun to convene energy stakeholders and experts to assess potential risks to the grid. The Department of Energy has developed and expanded AI testbeds and model evaluation tools. The Department of Energy has created an office to coordinate the development of AI and other critical and emerging technologies across the agency. |
President’s Council of Advisors on Science and Technology | Submit to the President and make publicly available a report on the potential role of AI in research aimed at tackling major societal and global challenges. | The Council of Advisors on Science and Technology has authored a report, published on April 23, 2024, on AI’s role in advancing scientific research to tackle societal challenges. |
Secretary of Labor, in consultation with other agencies and outside entities | Develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits. Issue guidance to make clear that employers that deploy AI to monitor or augment employees’ work must continue to comply with protections that ensure that workers are compensated for their hours worked. | The Department of Labor has developed principles and practices for employers and developers to build and deploy AI safely and in ways that empower workers. The Department of Labor has also issued guidance to assist federal contractors and employers in complying with worker protection laws as they deploy AI, along with separate guidance dedicated to the application of the Fair Labor Standards Act. |
Secretary of HHS, in consultation with relevant agencies | Publish a plan addressing the use of automated or algorithmic systems in the implementation of public benefits and services administered by the Secretary. Develop a strategy to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality, and develop an AI assurance policy and infrastructure needs for enabling pre-market assessment and post-market oversight of AI-enabled healthcare technology. Consider appropriate actions to advance the prompt understanding of and compliance with federal nondiscrimination laws and how those laws relate to AI. | HHS has published a plan with guidelines on managing the risks of AI in the administration of benefits programs. HHS has also developed a strategy for ensuring the safety and effectiveness of AI deployed in the healthcare sector, which includes frameworks for AI testing and evaluation and outlines future actions for HHS to promote responsible AI development and deployment. HHS has announced a final rule clarifying that nondiscrimination requirements in health programs and activities continue to apply to the use of AI, clinical algorithms, predictive analytics, and other tools. |
Secretary of Agriculture | Issue guidance to state, local, tribal, and territorial public benefits administrators on the use of automated or algorithmic systems in implementing benefits or in providing customer support for benefit programs administered by the Secretary. | The Department of Agriculture has published guidance and principles that set guardrails for the responsible and equitable use of AI in administering public benefits programs, including how the government at all levels should manage AI risks in benefits programs. |
Secretary of Housing and Urban Development | Issue guidance addressing the use of AI in tenant screening and in the advertising of housing and other real estate-related transactions. | The Department of Housing and Urban Development has published two guidance documents, one on the Fair Housing Act’s application to the screening of tenants and another on its application to the advertising of housing, affirming that existing prohibitions against discrimination apply to the use of AI for tenant screening and the advertising of housing opportunities, and explaining how deployers of AI tools can comply with these obligations. |
Director of the Office of Personnel Management, in coordination with the Director of OMB | Develop guidance on the use of generative AI for work by the federal workforce. | The Office of Personnel Management has developed guidance on the use of generative AI by the federal workforce. |
Administrator of General Services, in coordination and collaboration with the Director of OMB, the Secretary of Defense, the Secretary of Homeland Security, the Director of National Intelligence, the Administrator of NASA, and the head of any other agency | Take steps to facilitate access to federal government-wide acquisition solutions for specified types of AI services and products, such as through the creation of a resource guide or other tools. | The General Services Administration has created a resource guide for federal AI acquisition. |
⸻
WITHIN 240 DAYS OF THE DATE OF THE AI EO – JUNE 26, 2024
Agencies | Requirements | Action to Date |
--- | --- | --- |
Secretary of Commerce, in consultation with the heads of other relevant agencies | Submit a report to the Director of OMB and the Assistant to the President for National Security Affairs identifying the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, for authenticating, labeling, detecting, and auditing synthetic content, and for preventing generative AI from producing child sexual abuse material and non-consensual intimate imagery of real individuals. Within 180 days of submitting the report, develop guidance regarding the existing tools and practices for digital content authentication and synthetic content detection measures. | The Gender Policy Council and the Office of Science and Technology Policy have issued a call to action to combat image-based sexual abuse, including synthetic content generated by AI. |
Director of NSF | Identify ongoing work and potential opportunities to incorporate privacy-enhancing technologies (“PETs”) into the operations of agencies. | The NSF has launched a $23 million initiative to promote the use of PETs to solve societal problems. NSF will invest through its new Privacy-Preserving Data Sharing in Practice program to apply, mature, and scale PETs for specific use cases and establish testbeds to accelerate their adoption. |
⸻
WITHIN 270 DAYS OF THE DATE OF THE AI EO – JULY 26, 2024
Agencies | Requirements | Action to Date |
--- | --- | --- |
Secretary of Commerce, acting through the Director of NIST and in coordination with the Secretary of Energy, the Secretary of Homeland Security, and heads of other relevant agencies | Establish guidelines and best practices to promote consensus industry standards for the development and deployment of safe, secure, and trustworthy AI systems, including by launching a companion resource to NIST’s AI Risk Management Framework and to the Secure Software Development Framework, and launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities. Establish appropriate guidelines to enable AI developers to conduct red-teaming tests, including developing guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models and developing testing environments such as testbeds and PETs. | As discussed above, NIST has released the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile.” The AI Safety Institute has released for public comment new technical guidelines for AI developers on managing the risk of misuse of dual-use foundation models. NIST has also published final frameworks on managing generative AI risks and securely developing generative AI systems and dual-use foundation models. |
Secretary of Energy, in coordination with the heads of other Sector Risk Management Agencies | Develop and implement a plan for developing AI model evaluation tools, AI testbeds, and model guardrails to assess the capabilities and reduce risks of AI systems, specifically AI systems’ abilities to generate outputs that may pose a threat to nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security sectors. | The Department of Energy has developed AI safety and security guidelines for critical infrastructure owners and operators. |
Secretary of Defense and the Secretary of Homeland Security | Each to provide a report to the Assistant to the President for National Security Affairs on the results of actions taken pursuant to the plans and operational pilot projects, including a description of any vulnerabilities found and fixed through the development and deployment of AI capabilities and any lessons learned on how to identify, develop, test, evaluate, and deploy AI capabilities effectively for cyber defense. | The Department of Defense and the Department of Homeland Security have reported findings from their AI pilots to address vulnerabilities in government networks used for national security purposes and for civilian government, building on the pilot projects required within 180 days of the AI EO. |
Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State | Solicit input from the private sector, academia, civil society, and other stakeholders through a public consultation process on potential risks, benefits, other implications, and appropriate policy and regulatory approaches related to dual-use foundation models for which the model weights are widely available. Based on the input, submit a report to the President on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, as well as policy and regulatory recommendations pertaining to those models. | Following extensive outreach to experts and stakeholders, the Department of Commerce has prepared and will soon release a report on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, including related policy recommendations. |
Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office | Issue guidance to U.S. Patent and Trademark Office patent examiners and applicants to address other considerations at the intersection of AI and IP. | The U.S. Patent and Trademark Office has published guidance on evaluating the eligibility of patent claims involving inventions related to AI technology. |
⸻
WITHIN 365 DAYS OF THE DATE OF THE AI EO – OCTOBER 29, 2024
Agencies | Requirements | Action to Date |
--- | --- | --- |
Secretary of Veterans Affairs | Host two 3-month nationwide AI Tech Sprint competitions. Provide participants in such competitions access to technical assistance, mentorship opportunities, individualized expert feedback, potential contract opportunities, and other programming and resources. | The Department of Veterans Affairs has hosted two nationwide AI Tech Sprint competitions. |
Secretary of Labor | Publish guidance for federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems. | The Department of Labor published a guide for federal contractors and subcontractors to clarify contractors’ legal obligations, promote equal employment opportunity, and mitigate the potential harmful impacts of AI in employment decisions. |
Secretary of Health and Human Services, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs | Establish an AI safety program that provides a common framework to identify and capture clinical errors resulting from AI deployed in healthcare settings, analyze captured data and evidence to develop recommendations to avoid harm, and disseminate those recommendations to stakeholders and healthcare providers. | |
Secretary of Education | Develop resources, policies, and guidance regarding AI, addressing the safe, responsible, and nondiscriminatory use of AI in education. Develop an AI toolkit for education leaders. | The Department of Education has released a guide for designing safe, secure, and trustworthy AI tools for use in education. |
Secretary of Commerce, acting through the Director of NIST | Create guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections. | |
⸻
WITHIN 540 DAYS OF THE DATE OF THE AI EO – APRIL 22, 2025
Agencies | Requirements | Action to Date |
--- | --- | --- |
Director of NSF | Establish at least four new National AI Research Institutes (in addition to the 25 funded as of the date of the AI EO). | |
Secretary of Health and Human Services | Develop a strategy for regulating the use of AI or AI-enabled tools in drug development processes. | |