The United States Department of Labor (“DOL”) recently published “Artificial Intelligence and Worker Well-Being: Principles and Best Practices for Developers and Employers,” which is intended to guide employers’ use of artificial intelligence (AI) in employment decisions. The guidance—which does not have the force of law—enumerates eight guiding principles for the “responsible use” of AI.
- “Centering worker empowerment.” This principle is referred to by the DOL as the “North Star.” The DOL encourages integrating workers’ input in the design, development, testing, training, procurement, deployment, use, and oversight of AI systems, especially input from workers in “underserved communities,” including “people of color, Indigenous individuals, LGBTQI+ individuals, women, immigrants, veterans, individuals with disabilities, individuals in rural communities, individuals without a college degree, individuals with or recovering from a substance use disorder, justice-involved individuals, and opportunity youth.” If workers are represented by a union, the DOL advises employers to bargain in good faith regarding the use of AI and electronic monitoring in the workplace.
- “Ethically developing AI.” Developers should design AI systems that “enhance civil rights, equity, and safety and that are tested for accuracy, validity, and reliability.”
- “Establishing AI governance and human oversight.” Employers are encouraged to establish governance structures to oversee the implementation of AI systems and keep a human in the loop for any employment decisions. They should also conduct regular independent audits of AI systems.
- “Ensuring transparency in AI use.” Employers are advised to provide workers with “advance notice” of any AI use and implement procedures for workers to view and submit corrections to “individually identifiable data” used by AI in employment-related decisions.
- “Protecting employee rights.” Employers should mitigate the risk of disparate impact based on race, color, national origin, religion, sex, disability, age, genetic information, and other protected bases, by auditing AI systems and making those results public (a simple illustration of one common audit metric appears after this list).
- “Enabling workers and improving job quality.” Employers should consider how any AI system will impact “job tasks, skills needed, job opportunities and risks [to] workers,” and obtain workers’ input. Where electronic monitoring is used, employers should use the least invasive means necessary to “accomplish legitimate and defined business purposes.”
- “Supporting workers impacted by AI.” Employers should provide workers with “appropriate training” to learn how to use AI systems that complement their work. Where workers are displaced by AI, employers should prioritize retraining and reallocating those workers to other jobs within the organization, if feasible.
- “Ensuring responsible use of worker data.” Developers should design and build AI systems with safeguards to secure and protect worker data from “internal and external threats” and avoid the collection and retention of worker data that is not necessary to “a legitimate and defined business purpose.”
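For employers considering what an AI bias audit might look like in practice, one widely used screening metric is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: each group’s selection rate is compared to that of the most-selected group, and a ratio below 0.8 flags a potential adverse impact warranting closer review. The sketch below is purely illustrative, not part of the DOL guidance; the column names, sample data, and threshold are assumptions.

```python
# Illustrative disparate-impact screen using the "four-fifths rule"
# (EEOC Uniform Guidelines). Column names, sample data, and the 0.8
# threshold are assumptions for this sketch, not DOL prescriptions.
from collections import defaultdict

def impact_ratios(records, group_key="group", selected_key="selected"):
    """Return each group's selection rate relative to the highest-rate group."""
    totals, selections = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        selections[r[group_key]] += 1 if r[selected_key] else 0
    rates = {g: selections[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening log: each record is one candidate's outcome.
log = [
    {"group": "A", "selected": True},  {"group": "A", "selected": True},
    {"group": "A", "selected": False}, {"group": "A", "selected": True},
    {"group": "B", "selected": True},  {"group": "B", "selected": False},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]

for group, ratio in impact_ratios(log).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"group {group}: impact ratio {ratio:.2f} -> {flag}")
```

A ratio below 0.8 is a screening signal, not a legal conclusion; formal audits, such as those required for automated employment decision tools under New York City’s Local Law 144, involve independent auditors and more rigorous statistical analysis.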
Notably, many of these recommendations align with jurisdiction-specific AI laws and regulations in places like New York City, Colorado, and Illinois. The guidance adds to growing government efforts to address how employers should mitigate the potential risks of AI in employment decisions.