The European Commission published its long-awaited Guidelines on Prohibited AI Practices (CGPAIP) on February 4, 2025, two days after the AI Act’s articles on prohibited practices became applicable.
The good news is that, in clarifying these prohibited practices (and the practices excluded from the AI Act’s material scope), the CGPAIP also addresses other, more general aspects of the AI Act, providing much-needed legal certainty to the authorities, providers and deployers of AI systems/models that must navigate the regulation.
It refines the scope of general concepts (such as “placing on the market”, “putting into service”, “provider” or “deployer”) and of the exclusions from the scope of the AI Act, defines other concepts not expressly defined in the AI Act (such as “use”, “national security”, “purposefully manipulative techniques” or “deceptive techniques”), and takes a position on the allocation of responsibilities between providers and deployers using a proportionate approach (establishing that these responsibilities should be assumed by whoever is best positioned in the value chain).
It also comments on the interplay of the AI Act with other EU laws, explaining that while the AI Act applies as lex specialis to other primary or secondary EU laws with respect to the regulation of AI systems, such as the General Data Protection Regulation (GDPR) or EU consumer protection and safety legislation, it is still possible that practices permitted under the AI Act are prohibited under those other laws. In other words, it confirms that the AI Act and these other EU laws complement each other.
However, this complementarity is likely to pose the greatest challenges to both providers and deployers of AI systems. For example, while the European Data Protection Board (EDPB) has already clarified in its Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models (adopted in December 2024) that the “intended” purposes of AI models at the deployment stage must be taken into account when assessing whether the processing of personal data to train those models can be based on the legitimate interest of the providers and/or future deployers, the European Commission clarifies in Section 2.5.3 of the CGPAIP that the AI Act does not apply to research, testing (except testing in real-world conditions) or development activities regarding AI systems or AI models before they are placed on the market or put into service (i.e., during the training stage). Similarly, the CGPAIP provides some examples of exclusions from the prohibited practices (i.e., permitted practices) that are unlikely to find a lawful basis in the legitimate interests of providers and/or future users of the AI system.
The prohibited practices:
- Subliminal, purposefully manipulative or deceptive techniques (Article 5(1)(a) and Article 5(1)(b) AI Act)
This prohibited practice refers to subliminal, purposefully manipulative or deceptive techniques that are significantly harmful and materially influence the behavior of natural persons or groups of persons, or that exploit vulnerabilities due to age, disability or a specific socio-economic situation. The European Commission provides examples of subliminal techniques (visual and auditory subliminal messages, subvisual and subaudible cueing, embedded images, misdirection and temporal manipulation), and explains that the rapid development of related technologies, such as brain-computer interfaces or virtual reality, increases the risk of sophisticated subliminal manipulation.
When referring to purposefully manipulative techniques (which exploit cognitive biases, psychological vulnerabilities or other factors that make individuals or groups of individuals susceptible to influence), it clarifies that for the practice to be prohibited, either the provider or the deployer of the AI system must intend to cause significant (physical, psychological or financial/economic) harm. While this is consistent with the cumulative nature of the elements of Article 5(1)(a) of the AI Act that must be met for the practice to be prohibited, it could be read as an indication that manipulation of an individual (beyond that person’s consciousness) that is not intended to cause harm (for example, for the benefit of the end user or to offer a better service) is permitted. The CGPAIP refers here to the concept of “lawful persuasion”, which operates within the bounds of transparency and respect for individual autonomy.
With respect to deceptive techniques, it explains that the obligation of the deployer to label “deep fakes” and certain AI-generated text published on matters of public interest (Article 50(4) AI Act), and the obligation of the provider to design the AI system in a way that allows individuals to understand that they are interacting with an AI system (Article 50(1) AI Act), apply in addition to this prohibited practice, which has a much more limited scope.
In connection with the interplay of this prohibition with other regulations, in particular the Digital Services Act (DSA), the European Commission recognizes that dark patterns are an example of manipulative or deceptive techniques where they are likely to cause significant harm.
It also provides that there must be a plausible/reasonably likely causal link between the potential material distortion of behavior (a significant reduction in the ability to make informed and autonomous decisions) and the subliminal, purposefully manipulative or deceptive technique deployed by the AI system.
- Social scoring (Article 5(1)(c) AI Act)
The CGPAIP defines social scoring as the evaluation or classification of individuals based on their social behavior or personal or personality characteristics over a certain period of time, clarifying that a simple classification of people on that basis can trigger this prohibition and that the concept of evaluation includes “profiling” (in particular to analyze and/or make predictions about interests or behaviors). The prohibition applies where the scoring leads to detrimental or unfavorable treatment in unrelated social contexts and/or to unjustified or disproportionate treatment. Concerning the requirement that it lead to detrimental or unfavorable treatment, the CGPAIP establishes that such harm may be caused by the system in combination with other human assessments, but that the AI system must nevertheless play a relevant role in the assessment. It also provides that the practice is prohibited even if the detrimental or unfavorable treatment is produced by an organization different from the one that uses the score.
The European Commission states, however, that AI systems can lawfully generate social scores if they are used for a specific purpose within the original context of the data collection and provided that any negative consequences from the score are justified and proportionate to the severity of the social behavior.
- Individual Risk Assessment and Prediction of Criminal Offences (Article 5(1)(d) AI Act)
When interpreting this prohibited practice, the European Commission outlines that crime prediction and risk assessment practices are not outlawed as such; the prohibition applies only where the prediction that a natural person will commit a crime is based solely on profiling of that individual or on an assessment of their personality traits and characteristics. To avoid circumvention of the prohibition and ensure its effectiveness, any other elements taken into account in the risk assessment must be real, substantial and meaningful in order to justify the conclusion that the prohibition does not apply (the prohibition therefore does not cover AI systems that support a human assessment based on objective and verifiable facts directly linked to a criminal activity, in particular where there is human intervention).
- Untargeted Scraping of Facial Images (Article 5(1)(e) AI Act)
The European Commission clarifies that the purpose of this prohibited practice is the creation or enhancement of facial recognition databases (a temporary, centralized or decentralized database that allows a human face from a digital image or video frame to be matched against a database of faces) using images obtained from the Internet or CCTV footage, and that it does not apply to every scraping AI tool that could be used to create or enhance a facial recognition database, but only to untargeted scraping tools. The prohibition does not apply to the untargeted scraping of biometric data other than facial images, or to databases that are not used for the recognition of persons (for example, databases used to generate images of fictitious persons). The European Commission also clarifies that the use of databases created prior to the entry into force of the AI Act, which are not further expanded by AI-enabled untargeted scraping, must comply with applicable EU data protection rules.
- Emotion Recognition (Article 5(1)(f) AI Act)
This prohibition concerns AI systems that aim to infer the emotions (interpreted in a broad sense) of natural persons based on their biometric data in the context of the workplace or educational and training institutions, except for medical or safety reasons. Emotion recognition systems that do not fall under this prohibition are considered high-risk systems, and deployers will have to inform the natural persons exposed to them of the operation of the system, as required by Article 50(3) of the AI Act. The European Commission refers here to certain clarifications contained in the AI Act regarding the scope of the concept of emotion or intention, which does not include, for example, physical states such as pain or fatigue, nor readily apparent expressions, gestures or movements, unless they are used to identify or infer emotions or intentions. A number of AI systems used for safety reasons would therefore already fall outside this prohibition.
Similarly, the notions of workplace, educational and training establishments must be interpreted broadly. There is also room for member states to introduce regulations that are more favorable to workers with regard to the use of AI systems by employers.
It also clarifies that authorized therapeutic uses include the use of CE-marked medical devices and that the notion of safety is limited to the protection of life and health and does not extend to other interests such as property.
- Biometric Categorization for certain “Sensitive” Characteristics (Article 5(1)(g) AI Act)
This prohibition covers biometric categorization systems (except where purely ancillary to another commercial service and strictly necessary for objective technical reasons) that individually categorize natural persons on the basis of their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. The European Commission clarifies that this prohibition does not, however, cover the labelling or filtering of lawfully acquired biometric datasets (such as images), including for law enforcement purposes (for instance, to guarantee that data equally represents all demographic groups).
- Real-time Remote Biometric Identification (RBI) Systems for Law Enforcement Purposes (Article 5(1)(h) AI Act)
The European Commission devotes a substantial part of the CGPAIP to this prohibited practice, which refers to the use of real-time RBI systems in publicly accessible spaces for law enforcement purposes. Exceptions, based on the public interest, are to be determined by the member states through national legislation.
The CGPAIP concludes with a final section on the safeguards and conditions for the application of the exemptions to the prohibited practices, including the conduct of Fundamental Rights Impact Assessments (FRIAs). FRIAs are defined as assessments aimed at identifying the impact that certain high-risk AI systems, including RBI systems, may have on fundamental rights. The CGPAIP clarifies that FRIAs do not replace the existing Data Protection Impact Assessment (DPIA) that data controllers (i.e., those responsible for processing personal data) must conduct: they have a broader scope (covering not only the fundamental right to data protection but also all other fundamental rights of individuals) and complement, inter alia, the required DPIA, the registration of the system and the need for prior authorization.