On July 10, 2025, the European Commission published the final version of the General-Purpose AI Code of Practice (the “AI Code”), three weeks before the obligations relating to general-purpose AI models under the EU AI Act are due to come into effect.
Compliance with the AI Code is voluntary, but adherence is intended to help providers demonstrate compliance with certain provisions of the EU AI Act. According to the European Commission, organizations that agree to follow the AI Code will enjoy a “reduced administrative burden” and greater legal certainty compared to those that seek to demonstrate compliance in an alternative manner.
The AI Code is set to be complemented by forthcoming European Commission guidelines, expected later this month, which will clarify key concepts related to general-purpose AI models and aim to ensure their consistent interpretation and application.
The AI Code is divided into three separately authored chapters: transparency, copyright, and safety and security. Each chapter addresses specific aspects of compliance under the EU AI Act:
- Transparency: This chapter provides a framework for providers of general-purpose AI models to demonstrate compliance with their obligations under Articles 53(1)(a) and (b) of the EU AI Act. It outlines the documentation and practices required to meet transparency standards. In particular, signatories to the AI Code can comply with the EU AI Act’s transparency requirements by maintaining the information set out in a model documentation form (included in the chapter), which may be requested by the AI Office or a national competent authority.
- Copyright: This chapter details how to demonstrate compliance with Article 53(1)(c) of the EU AI Act, which requires providers to put in place a policy to comply with EU copyright law and to identify and comply with expressed reservations of rights. The AI Code sets out several measures to demonstrate compliance with Article 53(1)(c), such as implementing a copyright policy that incorporates the other measures in the chapter and designating a point of contact for complaints concerning copyright.
- Safety and security: This chapter applies only to providers of general-purpose AI models with systemic risk and relates to the obligations under Article 55 of the EU AI Act. It details the measures needed to assess and mitigate the risks associated with these advanced models, such as creating and adopting a framework setting out the processes and measures for systemic risk assessment and mitigation, implementing appropriate safety and security mitigations, and developing a model report, containing details about the AI model and its systemic risk assessment and mitigation processes, that may be shared with the AI Office.