The EU AI Act applies in stages. While most of its provisions will not apply until August 2026, key requirements for general-purpose AI (GPAI) models took effect much earlier, on August 2, 2025.
In anticipation of this earlier milestone, the Code of Practice for General-Purpose AI Models was published on the European Commission’s website on July 10, 2025. It is a voluntary tool, prepared by independent experts in a multi-stakeholder process involving nearly 1,000 participants (general-purpose AI model providers, downstream providers, industry organizations, civil society, rightsholders and other entities, as well as academia and independent experts). The Code represents an initial effort to translate the AI Act’s GPAI-specific obligations into practical measures.
It focuses on three central areas (Transparency, Copyright, and Safety and Security) and offers a framework that developers of GPAI models may rely on to demonstrate responsible practices in line with the EU’s evolving regulatory approach.
The Code is not yet formally adopted or endorsed. It will undergo review by the EU AI Office and the European AI Board, which will assess its alignment with the AI Act and determine whether it should serve as a recognized soft-law benchmark for compliance. Following that review, the European Commission and Member States may choose to endorse, revise, or retain it purely as a voluntary reference tool.
For now, the Code functions as a transitional governance instrument. While non-binding, it is expected to influence regulatory interpretation and shape market expectations in the run-up to the staged application of the AI Act.
What the Code Seeks to Achieve: An Expansive Approach
The Code builds directly on the requirements of the AI Act but also introduces several elements that go beyond the Regulation, offering a more detailed view of how obligations may be implemented in practice.
Transparency
The Code reflects Article 53 of the AI Act, which requires GPAI providers to prepare and maintain technical documentation in line with Annexes XI and XII and to make specified information available to downstream providers (who intend to integrate the GPAI model into their own AI systems) and to the AI Office or national competent authorities upon request.
It offers a user-friendly Model Documentation Form (PDF) that allows providers to record the information necessary to meet the AI Act’s transparency obligation on model providers. The form notably includes a description of “intended uses” (including the uses that are restricted and/or prohibited by the provider) and the type and nature of AI systems into which the GPAI model can be integrated.
Information is to be controlled for quality and integrity, retained, and protected from unintended alterations. It is to be provided to downstream providers subject to the confidentiality safeguards and conditions provided for under Articles 53(7) and 78 of the AI Act.
Beyond this, the Code introduces a fixed retention period of 10 years, from placing on the market, for the documentation of each model version; requires updates whenever significant changes occur; and suggests making certain non-sensitive information publicly available when placing the GPAI model on the market to foster trust (none of which is explicitly required by the Act).
Copyright
The Code expands on the obligation in Article 53(1)(c) of the AI Act to maintain a policy ensuring compliance with EU copyright and related rights law, including reproducing and extracting only lawfully accessible copyright-protected content as well as identifying and respecting machine-readable rights reservations “when crawling on the World Wide Web.”
It adds further operational measures, recommending technical safeguards to prevent infringing outputs, discouraging the use of content from persistently infringing sources, and introducing grievance mechanisms for rightsholders, including the designation of a point of contact. It specifies that “proportionate measures should be commensurate and proportionate to the size of providers, taking due account of the interests of SMEs, including startups.”
The Code specifies that adherence to it, even if it may help demonstrate compliance with the AI Act, does not as such constitute compliance with Union law on copyright and related rights, nor does it affect agreements with rightsholders.
Safety and Security
The Code contains a 40-page chapter that mirrors the obligations in Articles 55(1) and 56(5), and recitals 110, 114, and 115 of the AI Act, which require providers of systemic-risk GPAI models to identify, assess, and mitigate systemic risks, conduct evaluations (including adversarial testing), implement cybersecurity measures, and report serious incidents.
However, it supplements these requirements with more structured guidance, including defining acceptable risk thresholds, gathering model-independent information, conducting both pre- and post-market assessments, maintaining continuous post-market monitoring, and allocating responsibility for systemic risks. Its recitals outline, among other things, the Principle of Appropriate Lifecycle Management, under which providers must take appropriate measures “along the entire model lifecycle (including during development that occurs before and after a model has been placed on the market),” and also cooperate with and take “into account relevant actors along the AI value chain (such as stakeholders likely to be affected by the model).”
The Code also recognizes that “simplified ways of compliance for Small and medium enterprises (SMEs) and small mid-cap enterprises (SMCs), including startups, should be possible as proportionate”.
Strategic Divergence and Its Wider Implications
The publication of the Code has already prompted different responses among developers of general-purpose AI models navigating the EU’s emerging regulatory landscape.
Some have chosen to align with the Code, which could be seen as a way to engage proactively with European regulators and, potentially, to help shape its refinement during the forthcoming review process. Such alignment might also be viewed as a means to reinforce relationships with downstream partners and enhance credibility in a market increasingly focused on demonstrable accountability.
Others have opted not to adopt the Code at this stage. This decision may reflect concerns that a voluntary instrument could, over time, evolve into a de facto binding standard, adding compliance expectations and legal uncertainties beyond the express requirements of the AI Act. In this view, declining to adopt the Code helps avoid committing to practices that could later prove unnecessarily burdensome or extend beyond what the legislation itself demands.
However, remaining outside the process could have longer-term implications. A lack of early engagement may reduce the opportunity to influence how the Code is ultimately shaped during its review by the AI Office and the European AI Board. It could also be perceived by policymakers as limited willingness to cooperate with European regulatory initiatives, which might, in turn, lead to closer scrutiny or fewer opportunities to participate in EU-supported initiatives and partnerships. In addition, distancing from such frameworks may leave more room for competitors who appear more aligned with European priorities to shape the regulatory conversation, potentially affecting how a provider is regarded in a strategically important market.
Conclusion (Engagement or Observation?)
The EU GPAI Code of Practice, though voluntary, is already influencing the debate around the future of AI regulation in Europe and beyond. For developers, the decision to engage with the Code (or to remain on the sidelines) requires balancing immediate legal considerations against longer-term strategic interests, including market positioning, reputation, and the ability to shape emerging regulatory norms.
Early participation may help build constructive relationships with regulators and downstream stakeholders while offering a voice in the refinement of soft-law frameworks that could later underpin formal regulatory practice. Conversely, staying outside the process preserves flexibility and avoids premature commitments but may also reduce visibility and influence in conversations that will shape future approaches.
While the Code is EU-centric, its principles may nonetheless find relevance beyond Europe. Developers might consider adopting some of these measures in other key markets, including the United States, where the policy climate currently places greater emphasis on fostering innovation and market-driven approaches rather than pursuing a comprehensive regulatory framework. Voluntary alignment with elements of the Code could help companies anticipate potential regulatory convergence, address evolving expectations in transatlantic markets, and build trust in an environment where AI governance is receiving growing attention.
Ultimately, how individual actors respond will depend on their assessment of the EU’s regulatory trajectory, their appetite for proactive engagement, and the extent to which they view Europe as a key market or a potential driver of global AI standards.