As artificial intelligence (AI) grows in prevalence and accessibility, employers should consider the implications of its use by their employees. One way to anticipate and reduce potential liability is to adopt internal AI policies. This article focuses on key issues employers should consider when drafting and implementing an internal AI policy. Later articles will explore the use of AI in software development, intellectual property issues, and confidentiality concerns, among other topics.
What is AI?
As the modern workplace becomes increasingly open to, and reliant on, technology generally referred to as AI for daily tasks, a threshold question arises: what exactly is AI? At present, there is no uniform definition. AI is generally understood to refer to computer technology that simulates human intelligence in order to analyze data, reach conclusions, find patterns, and predict future behavior, and that can learn from such data and adapt its performance over time. At its core, AI is computer software programmed to execute algorithms. Generative AI is a particular type of AI that uses unsupervised learning algorithms to create new digital content, such as images, video, audio, text, or computer code, based on existing content.
Employer Considerations
Employers have a lot on their plates, and that now includes managing how their employees use AI in the workplace. Looking specifically at healthcare IT companies, for example, employees can generally be divided into three rough categories: (1) those involved in the administrative side of the business, (2) those involved in healthcare technology services, and (3) those involved in software development. The considerations relevant to developing an AI policy for the administrative side (human resources, the C-suite, and marketing) are detailed below, while technology-services and software-specific concerns will be addressed in later editions of this series.
Human Resources
Companies are increasingly using AI for repetitive, data-driven human resources and employee-management functions. Common uses include recruiting, hiring, and onboarding new employees. While automating these tasks with AI can be more efficient and potentially reduce costs, there are practical, legal, and regulatory challenges that all employers should consider.
Arguably one of the more contentious uses of AI is in the screening, interviewing, and hiring process. AI is praised for its ability to streamline these steps by automatically ranking, eliminating, and selecting candidates with minimal human intervention, but employers should not get so caught up in this convenience that they overlook the host of federal, state, and local anti-discrimination laws that govern hiring. Violating these laws could be detrimental to a business.
While some argue that AI programs actually reduce bias in these decision-making scenarios, it is important to remember that AI is a product of its data set. AI may take into account characteristics employers are not legally allowed to consider during the hiring process, such as an applicant’s age, race, religion, sexual orientation, or genetic information, because certain AI tools collect information from the internet, social media, or public databases. Further, depending on the data set and algorithms used, AI recruiting programs may replicate past discriminatory practices, as the sketch below illustrates.
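To make the “product of its data set” point concrete, consider a minimal sketch of how a screening tool trained on historical hiring decisions can reproduce past bias. The data, the “college” feature, and the scoring logic below are all invented for illustration; real screening tools are far more complex, but the underlying dynamic is the same.

```python
# Hypothetical illustration: a toy screening "model" that scores new
# applicants by how often similar applicants were hired in the past.
from collections import defaultdict

# Invented historical hiring records: (school, was_hired).
# Suppose past (human) decisions favored graduates of College X, and
# College X's student body happens to skew toward one demographic group.
history = [
    ("college_x", True), ("college_x", True), ("college_x", True),
    ("college_y", False), ("college_y", False), ("college_y", True),
]

# "Training" here is simply tallying the historical hire rate per school --
# that tally is the model's entire knowledge of what a "good" candidate is.
hired = defaultdict(int)
total = defaultdict(int)
for school, was_hired in history:
    total[school] += 1
    hired[school] += was_hired  # True counts as 1

def score(school: str) -> float:
    """Score a new applicant by the historical hire rate of their school."""
    return hired[school] / total[school] if total[school] else 0.0

# Two equally qualified applicants receive very different scores purely
# because of where past hiring concentrated: the model never sees a
# protected characteristic, yet the school acts as a proxy for one.
print(score("college_x"))  # 1.0
print(score("college_y"))  # ~0.33
```

The takeaway for policy drafting is that a tool can discriminate through proxy variables even when protected characteristics are never explicitly provided to it.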
Employers are exposed not only to traditional discrimination claims but also to disability discrimination claims. In May 2022, the Equal Employment Opportunity Commission issued technical guidance on AI decision-making tools and algorithmic disability bias that identifies the following three ways in which an employer using these tools may violate the Americans with Disabilities Act (ADA):
1. Failing to provide a reasonable accommodation needed for the algorithm to rate the individual accurately.
2. Using a tool that “screens out” a disabled individual (whether intentionally or unintentionally) when the individual is otherwise qualified to do the job, with or without a reasonable accommodation. This may occur if the disability prevents the individual from meeting minimum selection criteria or performing well on an online assessment.
3. Using a tool that makes impermissible disability-related inquiries and medical examinations.
While no federal law currently regulates the specific use of AI in recruiting or hiring, it is necessary to evaluate state and local laws and regulations when drafting internal policies for AI use in human resources. Because these laws and regulations are changing rapidly, employers must also monitor state and local developments to ensure their recruiting and hiring practices remain compliant and to protect the business from liability for discrimination claims or other legal issues.
C-Suite
When it comes to the C-suite, there are higher-level concerns at play, and it is necessary to outline how, and more importantly how not, to use AI as a C-suite member. One major concern is confidentiality, including the protection of valuable trade secrets. If an employee inputs confidential information into a generative AI program, that information may become part of the program’s data set and, ultimately, its training. The program can then recall or reproduce that information for another user, potentially someone outside the company. This exposure can be detrimental to a business.
This can be particularly problematic when trade secrets are involved. A key requirement for maintaining trade secret protection is preserving the secrecy of the information; once that information is input into an AI program, it is likely no longer a trade secret. That loss can be a significant blow to a company whose value rests heavily on a trade secret or other confidential information. As such, it is important to set boundaries and rules for how employees may use AI programs, particularly C-suite members, who generally have higher-level access to confidential information.
Marketing
Conversely, the marketing team is an example of an internal team that may be able to use AI programs with less risk to the company. Generative AI tools can be especially helpful and time-saving for a marketing department, and in healthcare IT companies in particular, the marketing team is generally the furthest removed from confidential patient information. While policies outlining the proper use of generative AI tools, such as ChatGPT, should still be put in place, those restrictions can be of less concern to employers.
Further, while questions regarding intellectual property and AI are still early in their journey through the courts, it is prudent to anticipate intellectual property issues, particularly copyright and trademark ownership, before allowing a marketing team to use AI tools. Doing so helps avoid issues and confusion later and can eliminate the need for costly litigation down the road.
As AI programs become more common in the workplace, it is important for employers to begin implementing appropriate internal policies for employee management to avoid costly liability. These internal policies are just one small aspect of properly onboarding and using AI tools within a business. Other potential concerns will be highlighted in later editions of this multi-part series.