The European Commission has made it clear: The timetable for implementing the Artificial Intelligence Act (EU AI Act) remains unchanged. There are no plans for transition periods or postponements. The first regulations have been in force since Feb. 2, 2025, and further key obligations will become binding on Aug. 2, 2025. AI Act violations can draw significant penalties, including fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
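For a sense of scale: because the cap is the higher of the two figures, the 7% prong dominates for large undertakings. A minimal illustrative calculation (the actual fine in any case is set by the supervisory authorities, not by this formula):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious AI Act violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Example: a company with EUR 2 billion in global annual turnover
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```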
The AI Act marks the world’s first comprehensive legal framework for using and developing AI. It follows a risk-based approach that links regulatory requirements to the specific risk an AI system entails. Implementation may pose structural, technical, and governance-related challenges for companies, particularly in the area of general-purpose AI (GPAI).
Key Requirements and Compliance Obligations Under the EU AI Act
The AI Act centers on high-risk and prohibited AI practices. Certain applications have been expressly prohibited since Feb. 2, 2025. These include, among others:
- biometric categorization based on sensitive characteristics;
- emotion recognition systems in the workplace;
- manipulative systems that influence human behavior without being noticed; and
- social scoring.
These prohibitions apply comprehensively – both to the development and to the mere use of such systems.
On Aug. 2, 2025, comprehensive due diligence, transparency, and documentation requirements will also take effect for various actors along the AI value chain.
The German legislature is expected to entrust the Federal Network Agency (Bundesnetzagentur) with regulatory oversight. The agency has already set up the “AI Service Desk” as a central first point of contact for small and medium-sized enterprises, particularly for questions relating to the AI Act’s practical implementation. In addition, companies should closely monitor regulatory developments, for example the final Code of Practice for GPAI models, published on July 10, and the harmonization of technical standards, which may become the “best practice” benchmark for compliance.
Which Companies and Stakeholders Are Impacted by the EU AI Act?
General-Purpose AI (GPAI) Providers
Providers of GPAI models – such as large language or multimodal models – will be subject to a specific regulatory regime beginning Aug. 2, 2025. They will be required to maintain technical documentation that makes the model’s development, training, and evaluation traceable. In addition, transparency reports must be prepared that describe the model’s capabilities, limitations, and potential risks and provide guidance for integrators.
A summary of the training data used must also be published. This must include data types, sources, and preprocessing methods. The use of copyright-protected content must be documented and legally permissible. At the same time, providers must ensure the protection of confidential information.
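In practice, such a summary lends itself to a structured record. A minimal sketch, assuming hypothetical field names (the Commission’s published template for the public training-data summary is authoritative, not this illustration):

```python
from dataclasses import dataclass

@dataclass
class TrainingDataSummary:
    """Illustrative record of the required training-data summary.
    Field names are hypothetical; the Commission's template governs."""
    data_types: list[str]      # e.g. text, images, code
    sources: list[str]         # e.g. web crawls, licensed corpora
    preprocessing: list[str]   # e.g. deduplication, filtering
    copyrighted_content_documented: bool  # use documented and lawful
    confidential_info_protected: bool     # trade secrets redacted

summary = TrainingDataSummary(
    data_types=["text", "code"],
    sources=["public web crawl", "licensed news corpus"],
    preprocessing=["deduplication", "PII filtering"],
    copyrighted_content_documented=True,
    confidential_info_protected=True,
)
```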
GPAI Models with Systemic Risk
Extended obligations apply to particularly powerful GPAI models that are classified as posing “systemic risk.” Classification is based on technical criteria such as the computing power used for training, the model’s reach, or its potential impact. Providers of such models must notify the European Commission, undergo structured evaluation and testing procedures, and document serious incidents on an ongoing basis. In addition, heightened requirements apply in the areas of cybersecurity and monitoring.
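One of those technical criteria is concrete: under the Act, a GPAI model is presumed to have high-impact capabilities when the cumulative compute used for its training exceeds 10^25 floating-point operations. A minimal sketch of that threshold check (illustrative only; the Commission can adjust the threshold and designate models on other grounds):

```python
# Presumption threshold under the AI Act: cumulative training compute
# greater than 1e25 floating-point operations (FLOPs).
SYSTEMIC_RISK_FLOPS_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model meets the compute-based presumption of systemic
    risk. The Commission may also designate models on other grounds."""
    return training_flops > SYSTEMIC_RISK_FLOPS_THRESHOLD

print(presumed_systemic_risk(4e25))  # True - frontier-scale training run
print(presumed_systemic_risk(1e23))  # False - below the presumption threshold
```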
Downstream Providers and Modifiers
Companies that substantially modify existing GPAI models themselves become providers for regulatory purposes. A modification is considered substantial if retraining, fine-tuning, or other technical adjustments change the model’s functionality, performance, or risk profile significantly, and the modification does not merely amount to integration or use. In that case, all obligations that apply to original GPAI developers also apply to the modified model. In practice, fine-tuning in the context of business applications must therefore be carefully reviewed from a legal perspective and, where necessary, backed by appropriate compliance measures.
AI System Users
Companies that merely use AI systems – especially in applications with potentially high risks, such as recruitment, medicine, or critical infrastructure – are also required to maintain a complete inventory of the systems they use. In addition, they must ensure that prohibited applications are not used. Further obligations for high-risk AI systems, such as data protection impact assessments and internal monitoring, will apply beginning Aug. 2, 2026. The more extensive transparency obligations for AI system users – such as labeling AI-generated content – will likewise not become binding until Aug. 2, 2026.
Technical and Organizational Requirements
The AI Act’s implementation requires not only legal but also structural measures. Companies should consider the following steps to strengthen compliance:
- Establishing a complete AI inventory with risk classification (see the sketch after this list);
- Clarifying the company's role (provider, modifier, or deployer);
- Preparing the necessary technical and transparency documentation;
- Implementing copyright and data protection requirements;
- Training and verifying AI competence among employees (including external staff); and
- Adapting internal governance structures, including the appointment of responsible persons.
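For the first two items above, a minimal sketch of what an AI inventory entry with risk classification and role assignment might look like (field names and risk tiers are simplified illustrations, not a prescribed format):

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    """Simplified tiers following the Act's risk-based approach."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"   # transparency obligations
    MINIMAL_RISK = "minimal-risk"

class Role(Enum):
    PROVIDER = "provider"
    MODIFIER = "modifier"
    DEPLOYER = "deployer"

@dataclass
class AISystemEntry:
    name: str
    vendor: str
    purpose: str
    role: Role               # the company's role for this system
    risk_class: RiskClass
    responsible_person: str  # internal owner per governance structure

inventory = [
    AISystemEntry("CV screening tool", "ExampleVendor", "recruitment",
                  Role.DEPLOYER, RiskClass.HIGH_RISK, "HR compliance lead"),
    AISystemEntry("Marketing copy assistant", "ExampleVendor", "drafting",
                  Role.DEPLOYER, RiskClass.MINIMAL_RISK, "Marketing ops"),
]

# Flag entries that need immediate attention under the Act
for entry in inventory:
    if entry.risk_class in (RiskClass.PROHIBITED, RiskClass.HIGH_RISK):
        print(f"Review required: {entry.name} ({entry.risk_class.value})")
```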
The Commission and national supervisory authorities have announced that they will closely monitor implementation. Companies should regularly review and adapt their compliance strategies, particularly with regard to the Codes of Practice and future technical standards.
Early Preparation for EU AI Act Compliance and Risk Mitigation
Aug. 2, 2025, is a binding deadline. Taking stock, clarifying roles, and evaluating systems may help create a solid foundation for regulatory certainty. GPAI providers and modifiers in particular should prepare for a higher level of accountability. But traditional deployers are also required to ensure transparency and control of their AI applications.
Early action may mitigate legal and financial risks, as well as underscore responsibility and future viability in dealing with artificial intelligence.