A new US National Cybersecurity Alliance (NCA) survey shows that over one-third (38%) of employees “share sensitive work information with artificial intelligence (AI) tools without their employer’s permission.” Not surprisingly, “Gen Z and millennial workers are more likely to share sensitive work information without getting permission.”
The problem with employees sharing workplace data with chatbots is that if a worker inputs sensitive personal information or proprietary information into the model, that information may then be used to train the model, depending on the tool’s settings and terms of service. If another user later enters a query that the original information is responsive to, the model may reproduce the sensitive or proprietary data in its response. That is how many generative AI tools work: data disclosed to them can become part of the training corpus and is no longer private.
According to Dark Reading, several cases illustrate how significant the risk is when employees share confidential information with chatbots:
“A financial services firm integrated a GenAI chatbot to assist with customer inquiries, …Employees inadvertently input client financial information for context, which the chatbot then stored in an unsecured manner. This not only led to a significant data breach, but also enabled attackers to access sensitive client information, demonstrating how easily confidential data can be compromised through the improper use of these tools.”
Another real-world example involves the inadvertent disclosure of proprietary and confidential information by a misinformed employee:
“An employee, for whom English was a second language, at a multinational company, took an assignment working in the US…. In order to improve his written communications with his US based colleagues, he innocently started using Grammarly to improve his written communications. Not knowing that the application was allowed to train on the employee’s data, the employee sometimes used Grammarly to improve communications around confidential and proprietary data. There was no malicious intent, but this scenario highlights the hidden risks of AI.”
Incidents like these are more common than we think, and the percentage of employees using generative AI tools is only growing.
To combat the risk of inadvertent disclosure of company data by employees, it is essential for companies to develop and implement an AI Governance Program and an AI Acceptable Use Program, and to train employees on the risks and appropriate uses of AI in the organization. According to the NCA survey, more than half of all employees have NOT been trained on the safe use of AI tools. As the NCA puts it, “this statistic suggests that many organizations may underestimate the importance of training.”
Employees’ use of unapproved generative AI tools poses a risk to organizations because IT professionals cannot adequately secure an environment against tools that fly under their radar. Now is the time to develop governance over AI use, determine appropriate and approved tools for employees, and train them on the risks and safe use of AI in your organization.