Why Workplace AI Policies are Essential
June 29, 2023
By Anthony Kaylin, courtesy of SBAM-approved partner ASE
Sample policies located here.
Employees are using ChatGPT and other AI tools to perform their jobs. One survey found that around 43% of employees use AI tools such as ChatGPT at work, mostly without telling their boss. The problem isn’t just the use of these tools but the fact that employees are feeding potentially confidential information into them to create solutions. Because generative AI tools continuously learn from their inputs, confidential information entered into them may become available to other organizations using the same tool, including competitors.
Samsung Electronics has banned the use of ChatGPT and other AI-powered chatbots by its employees after an engineer copied and pasted sensitive source code into ChatGPT. Like many companies, Samsung worries that anything uploaded to AI platforms such as OpenAI’s ChatGPT or Google’s Bard will be stored on those companies’ servers with no way to access or delete it, and that the information will then be accessible to others using these AI tools.
Because of these issues, HR leaders should take note, even though some AI vendors say they will fix the inappropriate use and release of confidential information. The number of AI tools is large and growing. Larger organizations are creating their own AI tools for their employees. For example, Amazon banned ChatGPT in January and urged its developers to use its in-house AI, CodeWhisperer, for coding advice or shortcuts. The Commonwealth Bank of Australia restricted the use of ChatGPT in June and directed technical staff to use a similar tool called CommBank Gen.ai Studio, developed in partnership with Silicon Valley tech company H2O.ai.
But smaller organizations may not have the resources to create an internal AI tool or the depth to police the usage of AI tools. Therefore, some organizations have banned AI tool usage. “Crucially, at this stage, there is no assurance of patient confidentiality when using AI bot technology, such as ChatGPT, nor do we fully understand the security risks,” said Paul Forden, who heads up Perth’s South Metropolitan Health Service. “For this reason, the use of AI technology, including ChatGPT, for work-related activity that includes any patient or potentially sensitive health service information must cease immediately.”
AI tools are simply not advanced enough for certain use cases yet. Take the case of the lawyer who used ChatGPT to write a brief that cited nonexistent cases. Not only was the lawyer found out, they will have to answer to the court for filing a brief built on fabricated citations – a filing that also violated the code of ethics as well as the trust of and duty to the client.
Unfortunately, with no regulation and a wild-west mentality surrounding the rise of AI, employers have to decide whether, and how, to use it. And if an employer doesn’t allow it, the risk of being behind the eight ball becomes apparent when competitors are using these tools.
HR should be proactive now, gaining insight into how AI is currently being used in the organization and working with legal counsel on common-sense policies for AI usage – from confidentiality issues to work-product creation issues. Further, as AI gains momentum, a number of contradictory regulatory frameworks will likely emerge. Multinational employers have to understand and reconcile how AI is affected by data privacy laws and rules outside the U.S. This area is new and complicated, and HR needs to be ahead of the curve.