On February 2, 2024, European Union policymakers reached agreement on the final text of the Artificial Intelligence Act (AI Act), moving the world’s first comprehensive AI legislation another step closer to reality and, fortunately, giving employers much-needed guidance on their forthcoming notice, human oversight, and other obligations.
Quick Hits
- Assuming that last week’s leaked text is, in fact, the current draft, the AI Act will require employers to notify employees and workers’ representatives before implementing “high-risk AI systems,” such as those used in recruiting or for other employment-related decision-making.
- The AI Act will also require employers to follow “instructions of use” provided by the producers of such high-risk AI systems, implement human oversight by individuals with adequate training, monitor use of the system for issues like discrimination, and retain records of the AI output, all while also maintaining compliance with existing data privacy obligations.
- Employers can expect other countries to quickly follow suit with legislation modeled on the AI Act.
Following a provisional agreement in December 2023, and last week’s leak of the purported final draft, EU policymakers have now officially agreed on the final text of the AI Act, which appears to be more prescriptive for employers than the original draft. Following procedural formalities, including a formal vote and publication in the Official Journal of the European Union, and barring any surprises, the AI Act will enter into force twenty days after publication and become effective twenty-four months after that (with some exceptions). Although that may seem like a long time, getting into compliance will be a heavy lift for employers. And the lift is likely to be even heavier, given that EU member states may implement their own regulations in the interim with shorter compliance windows. In fact, Portugal has already codified notice obligations for employers using AI in the workplace.
Employer Obligations Under the AI Act
Under the AI Act’s four-tiered, “risk-based approach” to regulating AI, the higher a tool’s risk, the stricter the rules governing it. AI tools intended to be used in the workplace are generally considered “high risk,” and therefore subject to significant regulations. Fortunately, most of the obligations fall on those creating AI tools, but there are also requirements for employers.
Generally, under the AI Act, employers must:
- inform workers’ representatives and affected workers that they will be subject to the AI system;
- implement human oversight by individuals who have adequate competence, training, and authority, as well as the necessary support;
- monitor use of the system and, if an issue such as discrimination arises, immediately suspend use of the system and notify the provider (or, as applicable, the importer or distributor) as well as the “relevant market surveillance authority”;
- maintain the logs automatically generated by the system for an appropriate period, which must be at least six months; and
- comply with any applicable data privacy laws.
Other requirements may apply depending upon the circumstances, such as the employer’s industry, what the AI system does, and whether the employer has control over the data inputted into the AI system.
Extraterritoriality and Heavy Fines
The AI Act is relevant for any employer using AI systems with output “intended to be used” in the European Union. That means even employers without a physical presence in the European Union may have compliance obligations under the AI Act if, for example, they make job postings available to EU candidates or if they have independent contractors or contingent workers in the European Union.
Also, much like the European Union’s General Data Protection Regulation (GDPR), there are significant penalties for noncompliance. The AI Act’s penalties range from €7.5 million (approximately US$8 million) or 1.5 percent of a company’s total worldwide annual turnover (whichever is higher) to €35 million (approximately US$38 million) or 7 percent of a company’s total worldwide annual turnover (whichever is higher).
A Model for Other Countries
The AI Act is, as it was intended to be, a comprehensive framework for AI, regulating everything from policing to commercial products to education and employment. Many in the legal and policy communities expect it to be to AI what the GDPR was to data privacy.
Employers using AI tools, and particularly those with cross-border operations, may wish to follow these developments and evaluate proactive compliance efforts.