As part of the UK data protection authority’s new three-year strategy (ICO25), launched on 14 July 2022, UK Information Commissioner John Edwards announced an investigation into the use of AI systems in recruitment. The investigation will have a particular focus on the potential for bias and discrimination stemming from the algorithms and training data underpinning AI systems used to sift recruitment applications. A key concern is that training data could be negatively impacting the employment opportunities of those from diverse backgrounds.
Bias is a particular risk in AI or machine learning systems designed not to solve a problem by following a set of rules, but instead to “learn” from examples of what the solution looks like. If the data sets used to provide those examples have bias built in, then an AI system is likely to replicate and amplify that bias. For example, if the successful candidates reflected in the training data share certain characteristics (such as gender, demographic profile or educational background), then the system risks screening out candidates whose profiles do not match.
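To make that mechanism concrete, the sketch below (synthetic data, hypothetical feature names, scikit-learn assumed) trains a simple screening model on historical hiring decisions that were biased against one group. The trained model then scores two equally skilled candidates differently.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicants: a genuine skill signal and a protected
# attribute (e.g. gender, encoded 0/1) that should be irrelevant.
skill = rng.normal(size=n)
protected = rng.integers(0, 2, size=n)

# Biased historical decisions: group 1 needed a markedly higher
# skill score than group 0 to be hired.
hired = (skill > np.where(protected == 1, 0.8, -0.2)).astype(int)

# Train a screening model on that history, protected attribute included.
model = LogisticRegression().fit(np.column_stack([skill, protected]), hired)

# Two candidates with identical skill but different protected attributes
# now receive different scores: the model has learned the historical bias.
print(model.predict_proba([[0.3, 0], [0.3, 1]])[:, 1])
```

Note that simply dropping the protected column does not cure the problem where other features act as proxies for it, which is one reason the ICO’s focus extends to the training data itself, not just the algorithm.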
The ICO also plans to issue refreshed guidance for AI developers on ensuring that algorithms treat people and their information fairly. However, even where algorithms and training data reflect ethical guidance, it will remain best practice to retain meaningful human involvement in decision-making. In effect, AI systems should produce recommendations for human review, rather than decisions (a pattern sketched after the list below). Under Article 22 of the EU and UK GDPR, decisions based solely on automated processing, including profiling, that produce legal or similarly significant effects on the data subject are restricted unless they are:
- necessary for entering into or performance of a contract between an organisation and the individual;
- authorised by law (for example, for the purposes of preventing fraud or tax evasion); or
- based on the individual’s explicit consent.
The making or withholding of employment offers would clearly constitute legal or similarly significant effects.
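One way to keep a system on the “recommendation” side of that line is to separate scoring from deciding. The sketch below (hypothetical types and field names, not a definitive implementation) has the model rank candidates while leaving every legal effect, offer or rejection, to an explicit human decision.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    candidate_id: str
    score: float                           # model output, advisory only
    rationale: str                         # features that drove the score
    human_decision: Optional[str] = None   # set only by a reviewer

def shortlist(scored: list[Recommendation], k: int) -> list[Recommendation]:
    # Rank by model score but reject nothing automatically: every
    # Recommendation leaves this function with human_decision unset.
    return sorted(scored, key=lambda r: r.score, reverse=True)[:k]

def finalise(rec: Recommendation, reviewer: str, decision: str) -> None:
    # The legal effect (offer or rejection) is produced here, by a named
    # person, preserving meaningful human involvement in the decision.
    rec.human_decision = f"{decision} (reviewed by {reviewer})"
```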
Where special category personal data is involved, decisions based solely on automated processing are permissible only:
- with the individual’s explicit consent; or
- where the processing is necessary for reasons of substantial public interest.
In addition, because decisions based solely on automated processing are considered high risk, the UK GDPR requires a Data Protection Impact Assessment (DPIA) showing that the risks have been identified and assessed, and how they will be addressed. Beyond the DPIA, compliance obligations include:
- giving individuals specific information about the processing;
- taking steps to prevent errors, bias and discrimination; and
- giving individuals rights to challenge and request a (human) review of the decision.
The ICO’s indication that AI in recruitment will be one of its investigative priorities over the next three years is significant. AI and machine learning tools are an increasingly valuable resource, but they carry compliance obligations that are now set to come under intense scrutiny from the UK’s data protection authority.