On April 29, 2024, the U.S. Department of Labor’s (DOL) Wage and Hour Division (WHD) published new guidance clarifying employers’ obligations under federal labor laws as they pertain to the use of automated systems and artificial intelligence (AI).
Quick Hits
- The DOL’s Wage and Hour Division published a new field assistance bulletin guiding its field staff on the implications of employers’ increasing use of automated systems and AI technologies.
- The guidance cautions that while such technologies have workplace benefits, human oversight is necessary to avoid results that violate federal labor laws.
- The guidance comes after President Biden issued an executive order (EO) calling on federal agencies to coordinate their approach to the development of AI and similar technologies.
The new Field Assistance Bulletin No. 2024-1, titled “Artificial Intelligence and Automated Systems in the Workplace under the Fair Labor Standards Act and Other Federal Labor Standards,” provides guidance to the WHD’s field staff “regarding the application of the Fair Labor Standards Act (FLSA) and other federal labor standards” to the growing use of automated systems and AI in the workplace.
Such technologies can track employees’ work hours, productivity, performance, and geographic location, as well as assign tasks, manage projects, or administer leave. However, the guidance cautions that “responsible human oversight” is necessary to ensure that these technologies do not result in violations of federal labor laws, warning that they pose the additional risk of “creating systemic violations across the workforce.”
The field assistance guidance comes six months after President Biden issued Executive Order 14110 on October 30, 2023, calling for a “coordinated, Federal Government-wide approach” to the responsible development and implementation of AI. The WHD’s release further coincides with the release of similar guidance by the DOL’s Office of Federal Contract Compliance Programs (OFCCP) concerning the use of automated systems and AI.
Compliance Obligations
The WHD field guidance emphasized several key areas of federal labor laws that may be implicated by employers’ use of automated systems or AI.
Hours Worked
The guidance clarifies that employers must ensure that any automated system or AI used to track work time, breaks, or geographic location accurately accounts for all hours worked and that those hours are “paid in accordance with federal minimum wage, overtime, and other wage requirements, even when those wage rates vary substantially due to a host of inputs.” Such technologies “may undercount hours worked,” which could lead to violations.
Further, the guidance notes that the determination of hours worked does not turn on an employee’s level of productivity, stating that technologies that track “keystrokes, eye movements, internet browsing, or other activity to measure productivity are not determinative of whether an employee is performing ‘hours worked’ under the FLSA” and do “not substitute for the analysis” for determining “hours worked.”
Rather, employers are obligated “to exercise reasonable diligence” to ascertain employees’ hours worked, which can include, for example, “certain time spent waiting and breaks of short duration.” This obligation extends to adequate oversight and review of AI systems that create “smart” entries for hours worked—that is, systems that auto-populate time entries based on prior predictive data—to ensure that predictive entries accurately represent the time actually worked.
Calculating Wages
The guidance explains the importance of exercising human oversight to ensure that automated systems and AI used to calculate wage rates “pay employees the applicable minimum wage and accurately calculate and pay an employee’s regular rate and overtime premium” under the FLSA and other applicable laws.
The guidance identifies systems that may use AI to calculate workers’ pay rates based upon various data such as “fluctuating supply and demand, customer traffic, geographic location, worker efficiency or performance, or the type of task performed by the employee,” or that have the ability to recalculate and adjust workers’ pay throughout the day based upon a variety of factors. Where such systems are used, employers are expected to ensure proper oversight to maintain compliance with applicable minimum wage and overtime laws.
Administering Leaves
The guidance states that automated systems or AI used by employers to process leave requests or certify leave must account for time as required by the FMLA and must not ask employees to provide more information to the employer than the FMLA allows. The guidance notes that while such issues can occur due to human error, “the use of AI or other automated systems could result in violations across the entire workforce.”
Nursing Employees
The guidance cautions that automated timekeeping or scheduling systems may violate the requirements under the FLSA, as amended by the Providing Urgent Maternal Protections for Nursing Mothers Act (PUMP Act), to provide reasonable break time for employees who need to express breast milk for a nursing child. Further, the guidance states that automated systems used to track productivity or monitor employees that “penalize a worker for failing to meet productivity standards or quotas due to the worker having taken pump breaks would violate the FLSA.”
Lie Detectors
Some AI technologies use “eye measurements, voice analysis, micro-expressions, or other body movements” to detect if someone is lying. The guidance cautions that use of such technology by employers may violate the Employee Polygraph Protection Act (EPPA) of 1988, which generally prohibits private employers from using lie detector tests on employees and job applicants.
Retaliation
The guidance states that the use of new technologies to take adverse actions against workers for engaging in activity protected by federal labor laws may result in prohibited retaliation. For example, according to the guidance, using such technologies to generate “pretextual reasons to penalize or discipline an employee for engaging in protected activity could constitute unlawful retaliation.” Further, use of such technologies to surveil employees to determine who might have filed a complaint with the WHD could constitute retaliation, the guidance states.
Evolving Legal Landscape
The Biden administration has made promoting the responsible use of automated systems and AI a priority. According to the October 2023 EO, the administration is focused on balancing the benefits of new technology against the risk that irresponsible use could lead to “fraud, discrimination, bias, and disinformation.”
The EO was released a year after the October 2022 publication of the “Blueprint for an AI Bill of Rights,” which outlined nonbinding recommendations for the design, use, and deployment of AI and automated decision-making systems. Further, the U.S. Equal Employment Opportunity Commission (EEOC) has issued guidance on the potential disparate impact and Americans with Disabilities Act (ADA) compliance concerns from the use of such technologies to make employment decisions.
In addition to the WHD’s guidance, the OFCCP released guidance, titled “Artificial Intelligence and Equal Employment Opportunity for Federal Contractors,” which noted “the use of AI systems … has the potential to perpetuate unlawful bias and automate unlawful discrimination.”
Next Steps
Automated systems, AI, and similar technologies have the potential to increase efficiency and productivity in the workplace, but employers may want to take note of guidance from the various regulatory agencies when using such technologies.
In its latest guidance, the WHD takes the position that employers are responsible when automated systems or AI result in a violation of federal labor laws or when employers use such technologies to facilitate results that would otherwise violate federal labor laws.
The WHD guidance underscores that AI remains front and center for state and federal regulators in 2024 as California continues to work toward finalizing its proposed automated decision-making regulations under the California Consumer Privacy Act. Additionally, at least ten other states have AI or automated decision-making-specific laws in various stages of the legislative process.