Using AI for Worker Safety: Benefits and Risks
Thursday, July 25, 2024

Artificial Intelligence (AI) is entering the field of workplace safety. Specifically, AI can help in areas such as:

  • Ergonomics. Through security cameras, AI can “see” when someone is lifting a box improperly, overreaching or jumping, or performing other acts that risk musculoskeletal injuries.
  • Area Controls. AI can detect blocked exits, spills, or other hazards in the environment and alert a supervisor to them before injuries occur.
  • Vehicle Safety. For warehouse vehicles, AI can note near misses. It can also detect when a vehicle, such as a forklift, is operating in a pedestrian-only area, and it can detect when a worker is walking in a “no pedestrian” zone.
  • PPE. AI can detect whether workers are wearing their protective equipment when and where they should.

However, these uses come with legal compliance risks. Last year, the Biden administration issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which we previously covered. This Order directed the Department of Labor to “develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.” The Order directed that these principles and best practices relate specifically to “implications for workers of employers’ AI-related collection and use of data about them, including transparency, engagement, management, and activity protected under worker-protection laws.”

Moreover, the Order directed the Labor Department to “issue guidance to make clear that employers that deploy AI to monitor or augment employees’ work must continue to comply with protections that ensure that workers are compensated for their hours worked” pursuant to the Fair Labor Standards Act. In response, the Labor Department has issued its “AI Principles for Developers and Employers.” Among these Principles are:

  • Centering Worker Empowerment, which the Labor Department considers the “North Star.” This principle is meant to ensure that “[w]orkers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace.”
  • Establishing AI Governance and Human Oversight. That is, companies using AI “should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace.”
  • Ensuring Transparency in AI Use. The Labor Department urges employers to “be transparent with workers and job seekers about the AI systems that are being used in the workplace.”
  • Protecting Labor and Employment Rights. The Labor Department warns: “AI systems should not violate or undermine workers’ right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections.”
  • Ensuring Responsible Use of Worker Data. The Labor Department urges companies to limit workers’ data collected, used, or created by AI systems in scope and location and to use it only to support legitimate business aims.

Together, the Executive Order and the Labor Department’s principles reveal a concern that employers may use AI to surveil their employees for legally impermissible purposes. This concern is as old as the security camera itself, but AI’s ability not merely to record workers’ activities but to analyze them raises new concerns.

For example, an AI video system could note whenever workers congregate and flag it for supervisor review. The congregating employees might be wasting time. Alternatively, they might be talking with their co-workers about working conditions, their pay, or other matters protected by federal labor law. Or they might be talking about potential illegal discrimination. If a company uses AI to detect protected activity and then acts to infringe upon its workers’ federally protected rights, it may face legal scrutiny from the National Labor Relations Board, the Equal Employment Opportunity Commission, or other federal agencies statutorily empowered to investigate and punish such conduct.

As another example, a company that disregards the Labor Department’s guidance regarding limiting data collection may run afoul of various laws. Companies with Illinois employees may find that unauthorized collection of their employees’ faces, fingerprints, or other biological indicators runs afoul of the Illinois Biometric Information Privacy Act, which includes a private right of action with substantial civil penalties. And a company that collects more video than it needs runs the unnecessary risk of that information being compromised, whether by mistake or by a deliberate cyberattack, which could also expose the company to liability.

Companies can use AI responsibly and effectively to promote worker safety. But they should not merely plug an AI program into their security system and let it run unsupervised. Companies must obtain appropriate legal guidance to ensure that they are using AI legally and safely. With appropriate AI governance informed by knowledge of the relevant law, companies can realize the promise that AI offers.
