On October 20, 2022, the French data protection authority (the CNIL) announced a €20 million fine against Clearview AI Inc. (Clearview) for its processing of facial images of individuals residing in France. This is the fourth fine Clearview has received (so far) in Europe, and it wraps up an investigation dating back to 2020, when the CNIL opened proceedings following multiple complaints from individuals and activist groups.
The CNIL also ordered Clearview to stop collecting and using the images, and to delete all (French) data collected to that point within two months. Failure to do so will trigger additional penalties of €100,000 per day of delay.
But what is at stake here, and what does it tell us about the use of artificial intelligence for biometric templates in the EU?
What Did Clearview Do?
Clearview’s activity under scrutiny consists of extracting “faces” from publicly available websites and social media platforms (i.e., scraping), including videos, and compiling a database of biometric profiles. It then offers this database to its clients (including the police), who can search for a person from a photograph using Clearview’s facial recognition tool. Beyond “just” images, clients can access information linked to the images, such as geolocation metadata embedded in a picture or the source websites. Clearview’s algorithm matches faces (according to the company’s own PR) against a database of more than 20 billion images.
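For readers curious about the mechanics, the sketch below illustrates, in broad strokes, how a facial-recognition search of this kind typically works: each face image is reduced to a fixed-length embedding vector (the “biometric template”), stored alongside source metadata, and then queried by vector similarity. Everything in the sketch (the embed_face stand-in, the FaceIndex class, the 512-dimension template size) is hypothetical and purely illustrative; Clearview’s actual system is proprietary, and a real deployment would use a trained neural network and an approximate-nearest-neighbor index to search billions of vectors.

```python
# Illustrative sketch of a face-recognition search pipeline.
# Hypothetical; NOT Clearview's actual implementation.
import numpy as np

EMBEDDING_DIM = 512  # typical size of a face embedding ("biometric template")

def embed_face(image_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a neural network that maps a face crop to a fixed-length
    embedding. A real system uses a trained model; here we derive a
    deterministic pseudo-random vector from the pixels, for demonstration."""
    rng = np.random.default_rng(abs(hash(image_pixels.tobytes())) % (2**32))
    vec = rng.normal(size=EMBEDDING_DIM)
    return vec / np.linalg.norm(vec)  # normalize: dot product = cosine similarity

class FaceIndex:
    """Minimal in-memory index: store embeddings alongside source metadata
    (e.g., source URL), then answer photo queries by cosine similarity."""
    def __init__(self):
        self.vectors = []   # list of normalized embeddings
        self.metadata = []  # parallel list of dicts (source URL, etc.)

    def add(self, image_pixels: np.ndarray, meta: dict) -> None:
        self.vectors.append(embed_face(image_pixels))
        self.metadata.append(meta)

    def search(self, probe_pixels: np.ndarray, top_k: int = 5):
        probe = embed_face(probe_pixels)
        sims = np.stack(self.vectors) @ probe          # cosine similarities
        best = np.argsort(sims)[::-1][:top_k]          # highest scores first
        return [(float(sims[i]), self.metadata[i]) for i in best]

# Usage sketch: index two "scraped" images, then query with a probe photo.
index = FaceIndex()
index.add(np.random.rand(112, 112, 3), {"source": "https://example.com/photo1"})
index.add(np.random.rand(112, 112, 3), {"source": "https://example.com/photo2"})
print(index.search(np.random.rand(112, 112, 3), top_k=1))
```

The legally salient point this sketch makes concrete is that the stored template, not the original photo, is the biometric data: once the embedding and its metadata are indexed, any future photograph of the same person can link back to the source material, which is why regulators treat such databases as ongoing processing rather than a one-off collection.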
The CNIL found that this business model violates the General Data Protection Regulation (GDPR) and identified the following main violations:
- Clearview did not (and probably could not) obtain the necessary consents from individuals, nor did it identify any other relevant legal ground. The CNIL therefore considered that there is no suitable legal basis for the processing. It rejected any attempt to rely on legitimate interest (even before assessing whether Clearview processes special categories of data), underlining “the intrusive and massive nature of the process” and the fact that users “do not reasonably expect their images to be processed by the company to supply a facial recognition system.” The fact that data is (made) publicly available does not remove the requirement for a specific legal basis for the web scraping practices.
- Data subjects’ rights, especially the rights of access and erasure, were not respected. Clearview restricted individuals’ right of access by limiting requests to two per year and imposing a cut-off date (covering only data collected in the preceding 12 months). It also failed to properly handle the access and erasure requests it did receive, often responding only partially and selectively.
- Clearview also failed to cooperate with the CNIL throughout the procedure, notably by disregarding the formal notice the CNIL issued in 2021.
The year 2022 has been bumpy for Clearview, as the CNIL’s fine is just one of many legal challenges the company faces. To recap, Clearview was previously fined for its activities in the UK, Italy and, most recently, Greece, with another decision pending in Austria. Its business model was also challenged in the US, where Clearview agreed to settle a lawsuit and stop selling its database to private businesses and individuals. The list of countries goes on: Canada, Australia and the German state of Hamburg have flagged privacy-law violations, while Sweden and Belgium condemned their own police authorities for using Clearview’s technology.
Clearview does not benefit from the “one-stop shop” mechanism in the EU (having no establishment there), so further sanctions from other supervisory authorities remain on the table.
The Future of Facial Recognition
What happened to Clearview is not unique. Facial recognition for commercial purposes, or when interlinked with enforcement authorities, is not a privacy cocktail to the EU’s taste; biometric templates seem to concentrate all the privacy challenges raised by the use of AI technologies.
Under the GDPR, biometric data processed to uniquely identify a person is a “special category of data.” It is afforded special protection and can only be processed in very limited cases. For biometric facial recognition templates, there are only two potentially available legal bases: (very hard to obtain) explicit consent, or a substantial public interest based on the law of a member state (which is unlikely to cover commercial practices). Privacy-enhancing technologies and privacy by design can help, but they cannot always be implemented successfully.
The Council of Europe’s framework draws a clearer distinction between biometric processing in the public and private sectors. In its Guidelines on facial recognition, it provides that processing by public authorities must be based on law, and that it is up to legislators to set clear parameters and to ensure necessity, proportionality and appropriate safeguards. On this view, biometric data processing may be acceptable for law enforcement purposes, while other security purposes (e.g., in schools and public buildings) should not be considered justified. For the private sector, the Council of Europe holds firm that explicit consent is the only available legal basis.
The pressure on facial recognition is mounting with the EU’s upcoming framework on artificial intelligence, the Artificial Intelligence Act (AI Act). The proposed AI Act takes a risk-based approach, completely banning some use cases and subjecting others (designated as “high risk”) to stringent control mechanisms both before and after market placement. Under the draft AI Act, many facial recognition systems would either be prohibited or treated as “high-risk” systems. For example, in one of the latest drafts, the use of real-time biometric identification systems in public spaces for law enforcement purposes is banned; this prohibition could be lifted only exceptionally, for important public security reasons and subject to appropriate judicial or administrative authorization. Most other facial recognition systems would be “high risk” (leaving out only verification and authentication systems). Many political groups in the European Parliament (one of the co-legislators on the AI Act) are still calling for a complete ban on biometric recognition systems, in both public and private spaces and whether real-time or not.
With the EU institutions still disagreeing on the approach, it remains uncertain what the new framework will ultimately mean for biometric identification systems. One thing, however, is certain: enforcement trends and legislative developments point to a clear view toward more oversight of biometric monitoring in the EU.