The rise of artificial intelligence has brought both opportunities and challenges to the workplace. However, a growing trend of employees using free or unauthorized AI tools poses significant risks, from security breaches to the loss of trade secrets. Recent reports indicate that some workers are engaging with AI in ways their employers have not authorized, highlighting the importance of establishing AI use policies and protocols that enable responsible and deliberate adoption.
One report by Ivanti revealed:
- 46% of office workers say some or all of the AI tools they use are not provided by their employer,
- 38% of IT workers are using unauthorized AI tools, and
- 32% of people using generative AI at work are keeping it a secret.
Another recent study out of the Melbourne Business School found that among those who use AI at work:
- 47% say they have done so in ways that could be considered inappropriate, and
- 63% have seen other employees using AI inappropriately.
What could possibly go wrong?
Finally, in a report aptly named “From Payrolls to Patents,” Harmonic found that 8.5% of prompts into popular generative AI tools included sensitive data. Of those prompts:
- 46% included customer data, such as billing information and authentication data.
- 27% included employee data, such as payroll data and employment records.
- 15% included legal and finance data, such as sales pipeline data, investment portfolio data, and M&A materials, and
- 12% included security policies and reports, access keys, and proprietary source code.
Inappropriate uses of AI in the workplace can result in a wide range of risks, including cybersecurity incidents, threats to national security, and the loss of IP protections.
How can business leaders minimize AI risks and encourage responsible AI adoption in the workplace?
In addition to applying technical solutions to address these risks, business leaders can implement a variety of organizational measures to support the responsible adoption of AI in the workplace. Such measures may include, for example:
- Adopting an AI policy. As a starting point, consider a policy that:
  - Prohibits the download and use of free AI tools without approval,
  - Prohibits sharing confidential, proprietary, and personal information with free AI tools,
  - Limits inputs, prompts, or asks of free AI tools, and
  - Limits the use and distribution of output from free AI tools.
- Updating existing policies, such as IT, network security, and procurement policies, to account for AI risks.
- Ensuring contracts for AI tools are reviewed prior to their use. AI developers often require disclosures or other measures in their terms and conditions, which may necessitate changes to users’ privacy statements or terms of use.
- Developing a data classification strategy. Help employees spot and label confidential, proprietary, and personal information.
- Training employees on AI risks and best practices.
- Monitoring the use of AI in your workplace on an ongoing basis. Monitoring may include regular review of contracts for AI tools (which can often change) or testing AI outputs for accuracy, relevance, and bias.
- Implementing an incident response plan that covers foreseeable AI scenarios.
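To make the data classification recommendation above concrete, here is a minimal, hypothetical sketch of how an organization might flag sensitive data in a prompt before it is sent to an external AI tool. The function name and patterns are illustrative assumptions, not part of any particular product; a real program would rely on vetted data loss prevention (DLP) tooling and patterns tuned to the organization's own data.

```python
import re

# Illustrative patterns only (assumptions for this sketch); real deployments
# would use vetted DLP rules tuned to the organization's data classification.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible access key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return labels for sensitive-data patterns found in a prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# A prompt containing customer billing details would be flagged for review:
flags = classify_prompt("Bill jane.doe@example.com, SSN 123-45-6789")
```

A screen like this could sit in a browser extension or network proxy, blocking or warning on flagged prompts; the same labels can double as training material to help employees recognize what counts as confidential, proprietary, or personal information.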
Employers should take the initiative and actively communicate with employees about AI risks and acceptable use, adopt clear AI policies, update existing security protocols, and provide employee training. Such actions can not only protect sensitive data but also empower employees to innovate responsibly. By prioritizing preparedness, organizations can benefit from AI gains—from enhanced productivity to cost savings—while reducing risks.
This article summarizes aspects of the law. This article does not constitute legal advice. For legal advice regarding your situation, you should contact an attorney.