Regulatory frameworks and policies surrounding artificial intelligence (AI) are emerging as a hot-button area of contention for technology companies, especially as generative AI is integrated into globally used, everyday software. Because generative AI is deployed across many jurisdictions, examining how these frameworks vary is imperative to designing compliance programs that prevent costly litigation.
The United States and the European Union are two jurisdictions currently racing to implement policies and regulations that provide guardrails for AI risk management and abuse prevention. Regulations promulgated by the EU and the US will likely set precedents for how technology companies design compliance programs around AI software integration. Because the two jurisdictions take different approaches to setting AI guidelines, there is confusion about how to design compliance programs that satisfy both bodies of regulation.
EU Versus US Approach to AI Legislation
While several US states have promulgated regulations to address the explosion of generative AI use, there are no official federal regulations. The US federal government has so far taken a risk-based, sector-specific, agency-driven approach to AI risk management. The US legislative approach also focuses on damage control rather than damage prevention: various federal agencies have drafted guidance documents, but the absence of federal regulations carrying statutory weight has contributed to an uneven development of AI policies. Furthermore, the stringency of AI risk management guidance depends heavily on the political administration, as conservative administrations tend to steer away from regulation, while more liberal administrations lean toward a stricter regulatory framework.
The EU’s approach to AI risk management, while complex and multifaceted in its own right, centers on building a solid legislative foundation and a cohesive strategy for tackling substantive issues related to AI. In stark contrast with the US approach, the EU takes a preventive approach, addressing anticipated AI-related issues before they arise.
The EU Artificial Intelligence Act Versus The US Federal AI Governance and Transparency Act
The European Artificial Intelligence Act is the first comprehensive law in the world to regulate AI.[1] The AI Act gives AI developers and deployers clear obligations and guidelines, classifying AI systems by risk level and requiring mitigation measures proportionate to the harm each level poses. The regulation also seeks to minimize the administrative and financial burdens on small and medium-sized companies. The Act’s end goal is to ensure that the growing use and implementation of AI does not infringe fundamental rights and that it promotes safe and ethical practices.
The US’s proposed Federal AI Governance and Transparency Act[2] focuses on creating a uniform federal standard by consolidating existing AI-related legislation and establishing transparency and accountability in the use of AI. Specifically, the bill would:

- define federal standards for responsible AI use by codifying critical safeguards for the development, acquisition, use, management, and oversight of AI used by federal agencies;
- create a cohesive and consistent federal AI use policy authority by clarifying and re-codifying the role of the Office of Management and Budget (OMB) in issuing policy guidance spanning all federal agencies;
- update existing Privacy Act notice requirements for records containing Personally Identifiable Information (PII);
- streamline and revise existing law on the government’s use of AI, removing repetitive and confusing provisions in the AI in Government Act of 2020 and the 2022 Advancing American AI Act;
- establish agency AI governance charters, requiring the official publication of governance charters for AI systems deemed high-risk and for other AI systems used by federal agencies that interact with sensitive personal records under the Privacy Act; and
- create additional mechanisms for public accountability by setting up a notification process for individuals and entities substantially affected by an agency determination influenced by AI.
The key now is to examine where the differences between the two measures lie. First, the EU AI Act entered into force on August 1, 2024, and will be fully applicable within two years, with a few exceptions;[3] the US Federal AI Governance and Transparency Act was only introduced in the House of Representatives in March 2024. Second, the EU AI Act centers on risk classification, categorizing AI systems so that technology companies can tailor their compliance efforts accordingly, while the US bill aims to give federal agencies a more cohesive regulatory framework and a more uniform approach to handling AI use in general. To date, no comprehensive US legislation directly regulates the use and implementation of AI, and the US has not created a risk-based classification system for AI systems in its relevant frameworks and legislation. Given the rapid pace of AI development and global integration, US risk management and rule promulgation will, unfortunately, develop through novel litigation, owing to the lack of explicit legal authority to regulate AI algorithms. In contrast, the EU can generally enforce its rulemaking on AI developers and applications because its investigatory powers are clearly outlined and it has clear authority to impose significant fines for non-compliance.
These diverging ideologies of risk management make it challenging to compare and contrast the unified legislation coming out of the EU with the patchy legislation coming out of the US.
How Should AI Companies Navigate the Different Frameworks?
Given the comprehensive nature of the EU AI Act and the patchy, still-developing framework in the US, AI developers must figure out how to tailor their compliance standards to prevent future litigation. The multijurisdictional nature of technology suggests one facially simple answer: tailor compliance and risk mitigation programs to the strictest regulatory framework that currently exists. Today, the most comprehensive framework is the EU’s, so technology companies should develop their risk mitigation strategies to meet EU standards as a starting point. As the US introduces more hardline federal statutory guidance, companies should adjust their programs to work in concert with the US framework as well.
By combining EU strategies for risk management with US litigation management, companies can begin to protect themselves on both the front end and the back end. The EU and the US will set precedents in creating the foundational policies governing AI risk management, and those precedents should deepen crucial collaboration between these governments and technology companies to ensure adequate safety and security in global AI governance.
[1] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai#:~:text=The%20AI%20Act%20is%20the,what%20AI%20has%20to%20offer.
[2] https://www.congress.gov/bill/118th-congress/house-bill/7532/text
[3] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai#:~:text=The%20AI%20Act%20is%20the,what%20AI%20has%20to%20offer. (Namely, prohibitions take effect after six months, the governance rules and the obligations for general-purpose AI models become applicable after twelve months, and the rules for AI systems embedded in regulated products apply after thirty-six months.)