An EU Blow Against the Black Box and a Road Map to Compliance
Wednesday, February 12, 2020

William Buckley famously described a conservative as a person who “stands athwart history, yelling Stop, at a time when no one is inclined to do so, or to have much patience with those who so urge it.”

While I am not sure I would classify the judges discussed below as “conservatives,” in the first week of February we witnessed their important decision to stand athwart the historic rise of behavior-monitoring artificial intelligence and order it stopped. AI’s growing influence on organizations has been taken for granted as unrelenting and inevitable, but we may have seen the first major effective pushback against that tide.

On February 5, 2020, a court in The Hague ruled that a Dutch automated surveillance system for detecting welfare fraud violated human rights and must be terminated. The court did not believe that the government of the Netherlands had adequately weighed the system’s intrusion on human rights against its functional value, stating:

“[T]he court is of the opinion that the safeguards provided . . . with a view to protecting the privacy of those whose data can be processed in [the system] are insufficient. Taking into account the transparency principle, the purpose limitation principle and the principle of data minimization, fundamental principles of data protection, the court deems the legislation insufficiently clear and verifiable for the conclusion that the interference that the use of [the system] entails in the right to respect for private life is necessary, proportionate and proportional to the purposes that the [system] serves.”[1]

One of the principal issues for the court was the lack of transparency in how the machine learning system determined who was likely defrauding the Dutch welfare and tax offices. The court said that such a black box would not allow affected citizens to be properly notified that their data was being analyzed, nor provide enough information for people accused of fraud to defend themselves.

The court wrote:

“The principle of transparency requires accessible and comprehensible information, communication and simple language, and the provision of information to data subjects about the identity of the controller and the purposes of the processing. Irrespective of this principle, further information must be actively provided on the basis of this principle to ensure proper and transparent processing of data and natural persons must be made aware of the risks, rules, safeguards and rights relating to the processing of personal data and the manner in which they are processed on which they can exercise their rights with regard to processing.”

The Dutch court was also concerned about the European rules against profiling, stating that EU law “contains provisions with regard to profiling and a prohibition on automated individual decision-making, including profiling. In Article 4, section 4, AVG, profiling is defined as any form of automated processing of personal data in which certain personal aspects of a natural person are evaluated on the basis of personal data, in particular with a view to his professional performance, economic situation, health, personal preferences, interests analyze or predict reliability, behavior, location or movements.” The system in question, like nearly all anti-fraud software, involved automated processing of personal data evaluating the behavior, reliability and performance of the people being checked. So this type of system clearly profiles individuals as that term is defined in EU human rights protections.

So what are we to make of this ruling? The court is clearly charged with weighing the functional purpose of such a system against its impact on the human rights protected by the EU Charter. The court also reasonably acknowledged the value of the fraud prevention system, noting the harm caused to any public welfare system by fraud, and in particular the Dutch government’s evidence of “undisputed amounts of EUR 153 million in social security fraud, EUR half to EUR 1 billion a year in social assistance fraud and EUR 135 million in social damage as a result of social security fraud.”

And, as observed in the decision, any such system may violate the general prohibition of fully automated decision-making, including profiling. Using large data sets increases the risk of profiling. Not being able to explain in detail how the AI system makes its decisions for a public entity seems to clearly violate the obligation of transparency required in the EU. And the fact that the system in question is operated by a government adds an extra layer of responsibility for protecting EU-recognized human rights.

On the other hand, I am not sure you could operate any fraud management system under the EU requirements of purpose limitation and data minimization with a court looking over your shoulder. It will always be difficult to establish that each item of information fed into a machine learning algorithm, both to train it and to examine transactions, was clearly necessary to the task, and that those tasks could not have been performed with less data. So, in some ways, opening large AI-driven analytics systems to such scrutiny could set a bad precedent, encouraging fraud on the public purse.

In addition, the court seemed concerned that affected individuals would not know, and would not be able to evaluate, how the fraud management was being conducted. Anyone in security will tell you that it is bad form to inform people committing fraud about what you are doing to find and stop them. Such notice would give them the ability to change their tactics to avoid detection. This part of EU law may be problematic in the future as the need to secretly detect criminals runs up against the individual’s right to know how information about them is being used. This will be a special concern for database analysis, a rapidly growing method of criminal detection.

According to The Guardian, UN human rights official Philip Alston commented that this Dutch decision “sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds.” While Mr. Alston seemed pleased with the result, it will make government functioning significantly more difficult in the EU if other courts adopt the same logic.

Are EU governments and companies going to be denied the latest tools in fraud prevention? Quite possibly. The Dutch court based its decision on multiple individual rights guaranteed by the EU Charter. European courts may feel that using AI-based analytics is too intrusive and impersonal to ever be justified in a balance between practical application and human rights. As discussed above, modern fraud detection techniques by their nature involve analysis of massive collections of transactional data points, analysis that occurs without notice to the people that data describes, and EU courts may always find this dynamic offensive to protected human rights.

Or EU governments and businesses may work to meet the “privacy by design” requirement of the GDPR, and build both transparency and accountability into their AI models from the beginning. They could demonstrate that only enough data was used to meet the purpose of the program and that data developed by the fraud detection system is minimized and quickly destroyed. They could show that such fraud detection programs are only one input into decisions ultimately made by people, even if those people are aided by machines. This would ameliorate concerns about lives affected by fully automated decision-making. Finally, providing the public some notice about the institution of these tools and roughly how they operate may convince courts that EU residents have been adequately notified.
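
To make those ideas a little more concrete, here is a minimal sketch, purely for illustration, of how data minimization, limited retention and human review might look in code. Every name in it (the allow-list of fields, the retention window, the review routing) is a hypothetical assumption on my part, not a description of any real system.

```python
# Illustrative only: hypothetical privacy-by-design measures for a fraud-scoring pipeline.
from dataclasses import dataclass
from datetime import datetime, timedelta

# Data minimization: only fields documented as necessary for the fraud check may be used.
ALLOWED_FIELDS = {"benefit_type", "reported_income", "household_size"}

# Purpose limitation / retention: derived risk scores are deleted after a fixed window.
RETENTION_DAYS = 30

@dataclass
class RiskScore:
    case_id: str
    score: float
    created_at: datetime

    def expired(self) -> bool:
        """Scores older than the retention window should be destroyed."""
        return datetime.utcnow() - self.created_at > timedelta(days=RETENTION_DAYS)

def minimized(record: dict) -> dict:
    """Drop any field not on the documented allow-list before scoring."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def route(score: RiskScore) -> str:
    """No fully automated decisions: a high score only refers the case to a person."""
    return "refer_to_human_reviewer" if score.score >= 0.8 else "no_action"

# Example: an undocumented field is discarded before any analysis takes place.
case = minimized({"benefit_type": "unemployment", "reported_income": 0, "shoe_size": 42})
print(case)
```

The design choice being illustrated is simple: the system may score cases, but the only thing a high score can do is put a case in front of a human reviewer.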

These actions could be a road map to compliance. While there is no guarantee that an EU court would find that the utility of AI-based fraud prevention outweighs the system’s cost to protected human rights, taking the steps described immediately above, and doing so in a public and transparent fashion, might give a practically minded set of judges the basis for measuring the equities more favorably to those looking to prevent fraud. It seems clear that without these steps, fraud detection programs based on machine learning and analytics will face an uphill climb to remain in use for European entities.

And what of U.S.-based companies using sophisticated fraud prevention programs? The big financial services companies have been applying this technology for many years and have built it into their daily operations. When you receive a notice that your credit card may have been used improperly, you are experiencing fraud prevention analytics at work. Microsoft has a nifty new AI-operated service to help companies identify fraudulent transactions. Many American companies are moving in this direction. Will they also fall afoul of the kind of thinking in Europe demonstrated by the Dutch court case described here?

If the European courts find that AI-driven analytics violate essential human rights of EU citizens, then it is likely that all companies applying these programs to EU residents will be prohibited from doing so, wherever they run the programs. We certainly know that European “public interest” organizations love to push big American companies into the crosshairs of EU regulators and judges, so it is likely that a trend of striking down analytic fraud prevention would quickly affect U.S. tech and financial businesses.

Companies wishing to avoid being in the first wave of these likely attacks should at least try to address the “black box” problem and add more transparency and explainability to the functioning of their AI services. How did the AI system conclude that this particular transaction was likely fraudulent? If you can’t say, then the EU is more likely to strike down use of the system. And were your fraud decisions made only by the AI-driven analytics, or did a human being intervene to add common sense and legal-compliance analysis to the process? Operating AI systems with no human intervention at all is asking for GDPR trouble in Europe, and may soon be a larger concern for U.S. states as well.
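
On the explainability point, here is a hedged sketch of one way to answer the “why was this transaction flagged?” question: use a transparent model whose per-feature contributions can be reported for each decision, and route anything it flags to a human analyst. The feature names, synthetic training data and threshold below are all assumptions for illustration, not anyone’s actual fraud model.

```python
# Illustrative only: explaining an individual fraud score with a transparent linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["amount_vs_typical", "new_merchant", "foreign_country"]

# Tiny synthetic training set purely for illustration.
X = np.array([[0.1, 0, 0], [0.2, 0, 1], [3.5, 1, 1],
              [4.0, 1, 0], [0.3, 1, 0], [2.8, 0, 1]], dtype=float)
y = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X, y)

def explain(transaction: np.ndarray) -> None:
    """Report the fraud probability and each feature's contribution to it."""
    prob = model.predict_proba(transaction.reshape(1, -1))[0, 1]
    contributions = model.coef_[0] * transaction
    print(f"fraud probability: {prob:.2f}")
    for name, value in sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name}: {value:+.2f}")
    # Human in the loop: the model never blocks a transaction on its own.
    if prob >= 0.8:
        print("  -> referred to a human analyst for review")

explain(np.array([3.2, 1, 1], dtype=float))
```

Nothing about this approach is specific to logistic regression; the point is that for every flagged transaction there is a recorded, human-readable account of which factors drove the score, and a person, not the model, makes the final call.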

The Dutch case is an early shot across the bow of any company or government blindly applying artificial intelligence to human behavioral issues. Run your programs with transparency and human common sense, or be prepared to lose them.

[1] The quotations from the court are taken from a Google translation into English of the Dutch decision. It is possible that some of the text was clumsily translated and does not catch the essence of the court’s original language. If so, I apologize, and would like to know about any mistakes so I can fix them.
