Is Your Use of AI in the Workplace Compliant and Guided by Policies? (Germany)
Tuesday, May 21, 2024

The recent decision of the Hamburg Labour Court concerning a German works council’s attempt to enforce a ban on the use of AI in a workplace makes it clear once again that employers cannot simply let the use of AI run its course unchecked.

Employers are well advised to take a moment to check their current IT landscape, given that implementation of the AI Act is now on the horizon: the EU Parliament voted on the Act on 13 March 2024. The text is now undergoing a final lawyer-linguist check and is expected to be formally adopted before the end of the legislative term; the law also needs to be formally endorsed by the Council. The Act will enter into force 20 days after publication in the Official Journal. Its rules will then apply in stages: within six months – so by December this year – systems posing unacceptable risks will have to be shut down, and after two years most of its provisions will apply.

The idea behind the Act is to regulate the use and development of artificial intelligence on a risk-based approach. The Act will apply to “providers”, “deployers”, “importers”, “distributors” and “authorised representatives”, which covers not only software developers but also users (hence employers and employees) and those involved in the distribution chain. Companies worldwide will be affected if they offer or operate AI systems within the EU, or if their AI-generated output is used within the EU, including in relation to any employees there.

Will you be in scope?

The chances that this applies to your company are high, particularly as an AI system is broadly defined as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments”. If you use IT solutions with embedded AI products and you use AI-generated results, you are in scope. For employers, that will include any AI system used in recruitment, selection, performance management and reward, for example.

To comply with the requirements set out in the Act, AI tools and products must be assessed to determine the level of risk they pose to the rights of individuals, in particular in relation to discrimination, data and privacy protection, etc. Many AI systems are unlikely to be considered as posing unacceptable risks, but the use of such systems for the management of individuals is seen as towards the top end of the risk scale: not unacceptable, but certainly high. In any event, they must be assessed. Even for lower-risk systems, such as video games, recommendation systems or spam filters, the Act requires companies to disclose that content is generated by AI.

Given the broad definition of AI systems and the wide scope of application of the AI Act, even the mere use of a chatbot to address common candidate or employee queries may fall under it. While you may still think that this has nothing to do with you, bear in mind that the AI Act sets out hefty fines (up to EUR 35 million or up to 7% of worldwide annual turnover, whichever is higher) and that it also applies to small and medium-sized enterprises (although lower fines are provided for them).

Assess what AI you already use

To determine your exposure, it makes sense to take a closer look at the systems currently in use. Employers are often so used to their systems that they may not even realise that those systems already contain AI. Once you have inventoried your systems and identified any potential gaps – which you will want to do anyway to comply with other legal and regulatory requirements around the GDPR and the protection of trade secrets – you will have a first idea of the extent of your exposure.

Next, take preliminary steps towards assessing what risk category applies to your systems. The AI Act provides for risk classifications based on the intended purpose and functionality of the AI product. The current draft Annex contains a list of use cases considered high-risk AI systems, including areas such as education, medical technology, critical infrastructure, law enforcement, border control, the administration of justice and democratic processes and, specifically, employment. Should you determine that a system you use falls into the high-risk category, you will then need to observe the stricter obligations and requirements that come with this (e.g. conformity assessments, registration with a public or private database and, in some instances, a Fundamental Rights Impact Assessment). You will also need to address any potential ethical and reputational risks the system carries.

Regulate sensible handling

It is not advisable to aim only at bare minimum compliance with the AI Act, since you may miss the mark and fall below it. Much better to actively implement measures to protect your organisation’s brand from a loss of trust resulting from strictly legal but borderline unethical AI practices, especially in the potentially sensitive world of employment relations.

Now is also a good time to implement detailed AI policies and relevant technical and organisational measures around your use of AI in your capacity as employer.

  • You will need to consider who internally owns the roll-out and has oversight on AI, and who on the board will ultimately be responsible.
  • You may need to take a global approach in risk-assessing your systems, since the reach of some AI tools used in staff matters will likely cross country borders in international businesses.
  • Further, consider what existing resources you have available to enable the efficient implementation of an AI compliance programme; you may need to create new roles or adapt existing ones.
  • Consider who should be consulted internally (e.g., IT, Security, DPO, Legal, Works Councils, Ethics & Compliance, HR personnel).
  • You will need to assess whether existing internal policies and procedures need to be adapted to meet the new AI requirements.

When drafting AI policies, think in detail, not in broad principles. Consider how people, process and technology will fit together, e.g.:

  • Who can use AI: Will you allow use by all departments?
  • Input data: What types of input should be permitted – text, images, drawings, photos, videos, audio, code? If code is permitted, should certain licence conditions be a prerequisite?
  • Rights on input: Who should hold the rights to the input data – only the company, or also third parties?
  • Confidentiality: Should certain input data be prohibited from use because it contains protected trade secrets or is otherwise considered confidential or sensitive?
  • Data protection: If input data contains personal data, how can your employees verify if there is a legal basis for your holding/processing their data?
  • Output: For what employment purposes will you allow the use of the output?
  • Checks: How should the factual accuracy, quality and legality of the output data be checked?
  • Approvals: Who should approve the use of each HR tool, and who should approve the use of the output in each case?

From all this, you may already guess that simply buying an external AI HR management system will not by itself be enough to bring your company into compliance with the formalities of the AI Act or various other areas of law, nor to ensure trustworthy practices and customer satisfaction. There is no “plug and play” tool that relieves a company of all the internal work necessary to reach compliance. However, an early start on assessing your intended use of AI systems in HR will hopefully minimise the burden of compliance when the time comes.
