The Framework Convention on Artificial Intelligence (the “Framework Convention”), the first legally binding international treaty aimed at addressing AI safety, was officially opened for signature on Sept. 5. The Framework Convention applies to both public and private sectors and has been signed by key global players, including the United States, European Union, and United Kingdom, along with other nations such as Israel and Norway. Although this is a global initiative, U.S. companies utilizing “artificial intelligence systems” (defined as machine-based systems that infer from the input they receive, for explicit or implicit objectives, how to generate outputs such as predictions, content, recommendations, or decisions that may influence physical or virtual environments) will need to understand the agreement’s practical impact on their operations, particularly with respect to compliance, timelines, and enforcement.
Timelines for implementation
The timeline for adopting the requirements of the Framework Convention will vary based on how quickly individual signatory nations, including the U.S., can integrate these provisions into their domestic legal systems. U.S. companies should prepare for a phased approach, where regulatory guidance will likely emerge gradually. However, with increased attention on AI safety, it is reasonable to expect an expedited implementation compared to past international agreements.
Key requirements for U.S. companies
For U.S. companies utilizing artificial intelligence systems, several critical requirements will arise from the Framework Convention’s provisions:
- Human rights protections: The Framework Convention mandates that companies deploying artificial intelligence systems must ensure that these systems do not violate human rights. This means companies will need to enhance their AI governance structures to ensure outputs and decisions do not harm individuals, particularly in areas like discrimination, privacy, and due process.
- Documentation and transparency: One of the key provisions is the requirement to maintain detailed records of artificial intelligence systems that impact human rights. U.S. companies should anticipate the need for robust documentation and traceability of AI decision-making processes, including data inputs, algorithms used, and outcome reasoning. This transparency is intended to enable affected individuals to challenge AI decisions, particularly in scenarios where bias or unfair outcomes are alleged.
- Complaint mechanisms: Companies will also need to establish or refine their procedures for handling complaints related to decisions made by artificial intelligence systems. This might involve working with U.S. regulators to ensure there are accessible channels for individuals to file grievances and contest AI-driven decisions, particularly those that affect employment, financial services, or other critical rights.
Enforcement and compliance expectations
While the Framework Convention sets the stage for international collaboration on AI safety, enforcement remains in the hands of each country’s regulators. In the U.S., we can expect that the Federal Trade Commission (FTC) will play a significant role in ensuring compliance with these provisions. The FTC has already demonstrated a strong focus on AI and algorithmic transparency. However, enforcement may vary depending on the industry sector. For example, industries like finance, healthcare, and insurance—where artificial intelligence systems can have significant human impacts—are likely to see stringent and faster oversight compared to sectors where AI impacts may be less immediate.
The Framework Convention is a step toward globally regulating artificial intelligence systems. For U.S. companies, the focus should now be on anticipating regulatory changes and preparing to meet the heightened standards of transparency and accountability that will be required.