A growing number of employers have moved to restrict employee use of ChatGPT, while other companies are embracing generative artificial intelligence, recognizing opportunities to streamline processes and augment workflows.
There’s certainly value in generative AI tools, but there are risks as well. Whether your organization chooses to ban or embrace tools like ChatGPT, there are new policy questions to answer to ensure that employees use these tools in line with the company’s concerns and opportunities.
The Need for New Policies
The advent of generative AI isn’t the first time that organizations have had to quickly revise or recreate policies, said Lisa Sterling, chief people officer at Perceptyx, a Temecula, Calif.-based employee listening company that has been closely following GenAI development. “When social media first became prevalent, organizations scrambled to build guidelines and policies regarding employee use to ensure a clear delineation between personal and professional use,” Sterling said. “Now GenAI has introduced some new twists on those guidelines, and most people are still working to define the new parameters.”
And, as with any new technology, generative AI brings organizations both benefits and potential challenges and risks.
“Generative artificial intelligence involves the use of algorithms to create new content and interfaces from existing data,” explained attorney Paul E. Starkman with Clark Hill in Chicago. “When used in the workplace, if vetted and implemented properly, AI can have positive effects, such as improving productivity, streamlining operations and developing content that is remarkably human-like.” However, he added, “there are legal, business and reputational risks and other implications involved in the utilization and creation of a policy addressing generative AI.”
The data used by AI and the output of generative AI systems “must often be constantly or at least periodically monitored, reviewed and audited,” Starkman advised. “Therefore, many organizations have instituted workplace policies on when and how generative AI systems may be used, with some organizations banning the use of certain AI systems altogether.”
Policy Considerations
Organizations must decide whether to allow employees to use generative AI and, if so, to what extent and for what purposes.
It’s also important, Sterling said, to provide a clear definition of generative AI. “This is a very new technology for many, so policies need to be precise on what it is and isn’t.” In addition, she said, “how, when and why it should be used should be clear for employees.” It can also be helpful to “provide use cases where GenAI can and should be leveraged.”
When considering the use of generative AI—both currently and in the future—Sterling said, “organizations need to be clear about what is acceptable and what is not.” This will involve a number of organizational functions. “When organizations are ready to craft their guidelines/policy, they will need the collective input and support of their legal, HR, operations, technology, compliance, data privacy and security teams,” Sterling said. “Each is critical in protecting the organization and its employees from legal, commercial and ethical risks.”
John Bremen, a managing director at WTW in the Chicago area, has been having a number of conversations with clients about generative AI and its implications. Generative AI policies are going to evolve, Bremen said, but for now most policies focus on three key areas: avoiding any risk to protected information, ensuring that users don’t inadvertently violate copyrights, and ensuring that the information used is honest and accurate.
Data privacy is a key consideration for any organization. Generative AI continually “learns” from the information it gathers from users. Depending on how individual apps use, store or share that data, your organization’s information may effectively become public, accessible to the AI and, consequently, to users far and wide.
As the New York City-based business membership and research organization The Conference Board has warned: “Uploading proprietary data into the AI apps for processing could pose organizational risks that include forfeiting ownership of information, intellectual property, patents, and copyrights.”
“One thing for employers to watch out for as they utilize GenAI prompts is data privacy—both company level and employee level,” agreed Emily Kilham, director of research and insights with Perceptyx. “Having a clear policy in place about trade secrets, personal information and GenAI will be important so that no one inadvertently causes a data privacy issue. As we see countries and areas of countries develop laws around the use of GenAI, those policies will need to be adjusted.”
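Policies like these can be backed by lightweight technical guardrails. As a hedged illustration, the Python sketch below screens a prompt for patterns a company might classify as sensitive before it is submitted to a generative AI tool; the pattern list, names and blocking behavior are all assumptions for demonstration, and a real deployment would rely on vetted data-loss-prevention tooling rather than a hand-rolled list.

```python
import re

# Illustrative only: patterns an organization might classify as sensitive.
# A real deployment would use vetted data-loss-prevention tooling.
SENSITIVE_PATTERNS = {
    "possible U.S. Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal codename (hypothetical convention)": re.compile(r"\bPROJECT-[A-Z]{3,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return policy findings for a prompt; an empty list means it may proceed."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt("Summarize the PROJECT-ATLAS roadmap for a client deck.")
    if findings:
        print("Hold for review; flagged:", "; ".join(findings))
    else:
        print("No sensitive patterns detected.")
```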
It’s also important for organizations to ensure that their employees’ use of generative AI tools doesn’t infringe on others’ copyrighted or protected information or intellectual property.
Another worry: the potential for misinformation in the form of “hallucinations.” As The Conference Board explains: “Based on inaccurate information and dependent on the sources used to train it, AI sometimes ‘hallucinates,’ meaning it confidently generates inaccurate information without flagging that shift for the user of the app.”
When relying on AI-generated output, it’s important to be transparent about its use, including being open and honest about the limitations, potential biases and uncertainties that may exist.
Additional Considerations
Starkman shared some key questions for teams tasked with creating an AI workplace policy:
- What is the focus of the policy? Will it address all AI or just generative AI? Will the policy be part of a computer and technology use policy?
- Are all relevant stakeholders’ interests incorporated in the policy?
- How is AI currently used in the organization, and how might it be used in the future?
- How will the policy be rolled out, and how will the policy become a living document that remains relevant despite changes in technology and usage?
Legal counsel is critical when developing and implementing generative AI policies. As Starkman noted: “Among the legal risks that can arise from the use of generative AI to perform human resources functions and participate in employment decisions is the potential that the AI algorithms may reinforce biased or discriminatory employment practices against legally protected groups, such as older workers, persons with disabilities, women and others.”
There are other troubling considerations, Starkman said. “Using AI to monitor employee activities can raise privacy issues, given that some AI tools purport to measure employee engagement and emotional states.” There are concerns among both employees and regulators, he said, “that using AI technology for increased monitoring can reveal protected disabilities and other confidential personal information. As a result, companies may want to address these concerns in their messaging, training and content of their AI policies.”
Training, Sterling agreed, is a critical part of policy implementation. “It all starts with training,” she stressed. “A policy is only effective if people are educated.”
As with other policies, such as those covering data privacy and sexual harassment, Sterling said companies should have annual education and certification processes to ensure employees know what’s expected of them. “Additionally,” she noted, “organizations can implement technology that monitors the usage and can detect inconsistencies and potential risks.”
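What such monitoring looks like in practice will vary by organization and vendor. As one hedged sketch, the Python example below aggregates hypothetical GenAI usage logs and surfaces items for human compliance review; the event fields, threshold and alert wording are all illustrative assumptions, not features of any particular product.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class UsageEvent:
    """One logged GenAI interaction; all fields are illustrative assumptions."""
    user: str
    tool: str
    chars_submitted: int
    flagged_sensitive: bool

def summarize_for_review(events: list[UsageEvent],
                         volume_threshold: int = 50_000) -> list[str]:
    """Surface usage patterns that may warrant a human compliance review."""
    alerts = []
    totals: Counter[str] = Counter()
    for event in events:
        totals[event.user] += event.chars_submitted
        if event.flagged_sensitive:
            alerts.append(f"{event.user}: sensitive content flagged in {event.tool}")
    for user, total in totals.items():
        if total > volume_threshold:
            alerts.append(f"{user}: unusually high submission volume ({total:,} characters)")
    return alerts

if __name__ == "__main__":
    sample = [
        UsageEvent("avery", "chat-tool", 60_000, False),
        UsageEvent("blake", "chat-tool", 1_200, True),
    ]
    for line in summarize_for_review(sample):
        print(line)
```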
Lin Grensing-Pophal is a freelance writer in Chippewa Falls, Wis.