Recognizing the need for regulation to ensure the safe use of AI, the European Union (EU) has introduced the world’s most comprehensive legal framework, the EU AI Act, designed to impose strict requirements on AI systems operating within its jurisdiction. Its objectives are clear; its implementation and enforcement, however, present challenges, and the debate over its impact on innovation continues to grow louder.
The EU AI Act, which officially entered into force in August 2024, aims to regulate the development and use of AI systems, particularly those deemed “high-risk.” Its primary focus is ensuring that AI is safe and ethical and operates transparently within strict guidelines. Enforcement formally kicked off on February 2, 2025, when the first compliance deadlines lapsed, banning certain AI practices outright and requiring companies to ensure AI literacy among their staff.
Noncompliance comes at a price. Companies found in violation may be fined €7.5 million ($7.8 million) to €35 million ($35.8 million), or 1% to 7% of their global annual revenue, depending on the severity of the violation, a significant financial deterrent.
The risk classification system is a critical aspect of the AI Act. At the highest tier, “prohibited AI practices,” such as biometric systems that categorize individuals based on race or sexual orientation, manipulative AI, and certain predictive policing applications, are banned outright. “High-risk” AI systems are permitted but subject to rigorous compliance measures, including comprehensive risk assessments, data governance requirements, and transparency obligations. AI systems posing limited transparency risk fall under Article 50 of the AI Act, which requires companies to inform users when they are interacting with an AI system. Finally, AI systems posing minimal to no risk are not regulated.
The EU AI Act is not without opposition. Other countries and big tech companies are pushing back against its implementation. Tech companies, for example, argue that stringent regulations will dampen innovation, making it harder for European startups to compete globally. Critics also contend that heavy compliance burdens could push AI development out of Europe and into less regulated regions, undermining the continent’s technological competitiveness.
Feeling the pressure, the EU has rolled back some of its initial regulatory ambitions, such as withdrawing the proposed AI Liability Directive, which would have made it easier for consumers to sue AI providers. The EU must walk a fine line between protecting citizens’ rights and cultivating an environment that encourages technological advancement.
A Step in the Right Direction
It remains to be seen whether the EU AI Act will serve as a model for other countries. There will be growing pains, and the EU should expect to iterate on the legislation. The current framework may not be perfect, but it is a necessary starting point for the global conversation on AI regulation.