This blog post provides a brief overview of the impact on video game developers of Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonized rules on artificial intelligence (“AI Act”). More and more developers are integrating AI systems into their video games, for example to generate backgrounds, non-player characters (NPCs), or backstories for objects found in the game. Some of these use cases are regulated under specific circumstances and create obligations under the AI Act.
The AI Act entered into force on 1st August 2024 and will gradually apply over the next two years. The application of the provisions of the AI Act depends predominantly on two factors: the role of the video game developer, and the AI risk level.
The role of the video game developer
Article 2 of the AI Act delimits the scope of the regulation, specifying who may be subject to it. Video game developers are most likely to fall under two of these categories:
- Providers of AI systems, i.e. developers of AI systems who place them on the EU market or put them into service under their own name or trademark, whether for payment or free of charge (Article 3(3) AI Act).
- Deployers of AI systems, i.e. users of AI systems in the course of a professional activity, provided they are established in the EU or have users of the AI system based in the EU (Article 3(4) AI Act).
Thus, video game developers will be considered (i) providers if they develop their own AI system, and (ii) deployers if they integrate an existing AI system made by a third party into their video games.
The AI risk level and related obligations
The AI Act classifies AI systems into four categories based on the risk associated with them (Article 3(1) AI Act). Obligations on economic operators vary depending on the level of risk resulting from the AI systems used:
- AI systems posing unacceptable risks are prohibited (Article 5 AI Act). In the video game sector, the most relevant prohibitions concern the provision or use of AI systems that deploy manipulative techniques or exploit people’s vulnerabilities, thereby causing significant harm. For example, it is prohibited to use AI-generated NPCs to manipulate players towards increased spending in a game.
- High-risk AI systems (Articles 6, 7 and Annex III AI Act) trigger strict obligations for providers and, to a lesser extent, for deployers (Chapter III, Sections 2 and 3 AI Act). The high-risk AI systems most relevant to video games are those which pose a significant risk of harm to the health, safety or fundamental rights of natural persons, given their intended purpose, in particular AI systems used for emotion recognition (Annex III(1)(c) AI Act). These could, for example, be used to make exchanges between players and NPCs more fluid and natural, eliciting strong emotions in players, who might feel genuine empathy, compassion, or even anger towards virtual characters.
- The obligations of providers of high-risk AI systems include implementing quality and risk management systems, applying appropriate data governance and management practices, drawing up technical documentation, ensuring transparency and providing information to deployers, keeping documentation, ensuring resilience against unauthorized alterations, and cooperating with competent authorities.
- Deployers of high-risk AI systems must notably operate the system in accordance with its instructions for use, ensure human oversight, monitor the operation of the system, and inform the provider and the relevant market surveillance authority of any incident or any risk to the health, safety, or fundamental rights of persons.
- AI systems with specific transparency risks include chatbots, AI systems generating synthetic content or deep fakes, and emotion recognition systems. They trigger more limited obligations, listed in Article 50 AI Act.
- Providers of chatbots must ensure that these are designed so that players are informed that they are interacting with an AI system (unless this is obvious to a reasonably well-informed person). Providers of content-generating AI must ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated (a minimal sketch of such marking follows this list).
- Deployers of emotion recognition systems must inform players of the operation of the system and process personal data in accordance with Regulation 2016/679 (GDPR), which applies alongside the AI Act. Deployers of deep fake-generating AI must disclose that the content has been artificially generated or manipulated.
- AI systems with minimal risk are not regulated under the AI Act. This category includes all other AI systems that do not fall into the aforementioned categories.
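To make the machine-readable marking and disclosure obligations more concrete, here is a minimal sketch in Python using the Pillow imaging library. It tags a generated texture with an “AI-generated” PNG metadata entry and prepends a player-facing disclosure to an AI-driven NPC’s dialogue. Note that the AI Act does not prescribe any particular format, and production pipelines would more likely rely on standards such as C2PA content credentials or watermarking; the metadata keys and function names below are purely illustrative.

```python
# Minimal sketch of Article 50-style transparency measures in a game asset
# pipeline. Assumes Python with Pillow installed; the metadata keys are
# illustrative -- the AI Act does not mandate a specific marking format.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

AI_FLAG_KEY = "ai_generated"    # hypothetical metadata key
AI_TOOL_KEY = "ai_generator"    # hypothetical metadata key


def save_with_ai_marker(image: Image.Image, path: str, generator: str) -> None:
    """Save a generated asset with a machine-readable AI-provenance tag."""
    meta = PngInfo()
    meta.add_text(AI_FLAG_KEY, "true")
    meta.add_text(AI_TOOL_KEY, generator)
    image.save(path, pnginfo=meta)


def is_ai_generated(path: str) -> bool:
    """Detect the marker when assets are re-imported into the build."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get(AI_FLAG_KEY) == "true"


def npc_line_with_disclosure(npc_name: str, line: str) -> str:
    """Prefix NPC dialogue with a player-facing AI disclosure."""
    return f"[{npc_name} is an AI-controlled character] {line}"


if __name__ == "__main__":
    texture = Image.new("RGB", (64, 64), color=(120, 80, 40))  # stand-in asset
    save_with_ai_marker(texture, "tavern_wall.png", generator="ExampleModel-v1")
    assert is_ai_generated("tavern_wall.png")
    print(npc_line_with_disclosure("Innkeeper", "Welcome, traveller!"))
```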
The European Commission has stated that, in principle, AI-enabled video games face no obligations under the AI Act, but companies can voluntarily adopt additional codes of conduct (see AI Act | Shaping Europe’s digital future). It should be borne in mind, however, that in specific cases such as those described in this section, the AI Act will apply. Moreover, the AI literacy obligation applies regardless of the risk level of the system, including minimal risk.
The AI literacy obligation
The AI literacy obligation applies from 2 February 2025 (Article 113(a) AI Act) to both providers and deployers (Article 4 AI Act), regardless of the AI system’s risk level. AI literacy is defined as the skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed deployment of AI systems, as well as to gain awareness of the opportunities and risks of AI and of the possible harm it can cause.
The ultimate purpose is to ensure that video game developers’ staff are able to take informed decisions in relation to AI, taking into account their technical knowledge, experience, education and training, the context in which the AI system is to be used, and the persons or groups of persons on whom the AI system is to be used.
The AI Act does not detail how providers and deployers should comply with the AI literacy obligation. In practice, various steps can be taken to achieve AI literacy:
- Determining which employees currently use or develop AI, or plan to do so in the near future, and how;
- Assessing employees’ current AI knowledge to identify gaps (e.g. through surveys or quiz sessions);
- Providing training activities and materials to employees using AI, covering AI basics and, at a minimum, the concepts, rules and obligations relevant to their roles.
Conclusion
The regulation of AI systems in the EU potentially has a significant impact on video game developers, depending on how AI systems are used within particular video games. These are early days for the AI Act, and we are watching this space carefully, particularly as the framework evolves to adapt to new technologies.