Make Sure Generative AI Policies Cover Intellectual Property

Generative artificial intelligence, such as ChatGPT, should be used only when policies are in place to ensure that a company's intellectual property isn't lost and that trade secrets aren't disclosed, legal experts say. This is true despite generative AI's potential usefulness in knowledge work, screening job candidates and accommodating individuals with disabilities.

“AI-generated works may qualify for copyright registration if they have sufficient human authorship to support a copyright claim,” said Elizabeth Shirley, an attorney with Burr & Forman in Birmingham, Ala. However, if people “merely use prompts to create AI-generated works, the content is likely not subject to copyright protection.”

Different Kinds of Intellectual Property Risks

There are three different potential intellectual property threats posed by generative AI, according to Bradford Newman, an attorney with Baker McKenzie in Palo Alto, Calif.

First, there are potential copyright infringement claims from third parties. For example, copyright holders of images used by generative AI tools may sue for compensation if that use amounts to unlawful infringement.

Second, there's the risk that the company cannot copyright the output of its own generative AI. If AI is the "mastermind," then whatever it generates is not copyrightable, Newman said. What is written entirely by a person can be copyrighted, but even human edits to what goes through generative AI may not render the edits copyrightable, he explained.

Third, depending on the terms of use set by the makers of the AI tools themselves, the company may not own the output. With respect to this risk, Newman said there are several possibilities:

  • The company nonetheless owns the output.
  • The company and the toolmaker jointly own the output.
  • The toolmaker owns the output.

“It should be assumed that the maker of the tools is collecting, and in many cases has the right to use, the prompts entered into its tool by users,” Newman said.

Tools like ChatGPT log each query and conversation users have with them, said Mark Girouard, an attorney with Nilan Johnson Lewis in Minneapolis. So, if an employee included confidential information in queries and conversations, that information could surface in responses to other users’ queries.

A best practice would be “to expressly prohibit employees from including any trade secrets or other confidential information in their queries and conversations with generative AI tools,” Girouard said.

Employees need to be aware of what company information is considered confidential or a trade secret so they do not use it with generative AI, Shirley said. She added that oversight for such a policy would be difficult.

“A more restrictive but conservative policy would be to prohibit any employee from using generative AI where that employee has access to confidential or proprietary information,” she said.

Companies that use AI more than sporadically should have a chief AI officer and well-designed governance and oversight policies that contain certain core components that bolster existing trade secret policies, Newman recommended. “This includes understanding precisely what AI technologies are being used, in what manner, and that there is a process to ensure both the input and output meet legal requirements and do not compromise confidential information,” he said.

Other AI Policy Concerns

Unless they are used carefully, AI tools can also introduce errors into work, offer biased assessments of job applicants, allow applicants to fudge their credentials, and create other challenges employers must manage.

Uses of AI in knowledge work. AI will have many uses in the workplace. For example, “generative AI will have a significant impact on knowledge work,” said Michael Chichester Jr., an attorney with Littler in Detroit. “It has the potential to augment any activity that relies upon assimilating and synthesizing a large body of information.”

Knowledge workers now have a more efficient tool to review historical information, which could lead to faster decisions and greater productivity, he said.

“However, just because the decisions are faster does not mean that they are correct, and maintaining the human element remains important,” Chichester cautioned.

Screening candidates. Generative AI already is being used to identify and screen employment and promotion candidates, track worker productivity, and predict a candidate’s likelihood of success in a particular role, said Michael Schulman, an attorney with Morrison Foerster in New York City.

“Employers should ensure that these AI tools are not adversely impacting protected class members,” he said.

Girouard said diversity, equity and inclusion concerns around the use of generative AI tools will be a major focus of legislation and litigation.

“On the employee-selection front, I have seen increasing concern about job applicants and employees using generative AI to cheat on pre-employment and promotional tests,” he said. “One solution for this is to use AI to detect AI-based cheating, which creates its own risks, as more jurisdictions are regulating how employers use AI in hiring.”

Reasonable accommodations. Chichester said the proliferation of generative AI may have a significant impact on accommodating individuals with disabilities. “Generative AI can aid accessibility through automatic closed-captioning or audio descriptions to aid individuals with hearing or visual impairments,” he said. “Additionally, generative AI that converts speech to text could be a reasonable accommodation in some circumstances.”

Generative AI may be an accommodation requested by employees who have difficulty writing, but intellectual property and trade secret concerns might make it an unreasonable option, Shirley said.

Girouard said that while generative AI might be a reasonable accommodation in some circumstances, "the law requires employers to provide an effective and reasonable accommodation, not necessarily the employee's preferred accommodation." So long as the employer offers another, effective accommodation, it would not be required to make generative AI tools available, Girouard noted.

Performance reviews. As for using generative AI for performance reviews, employees subject to such reviews would risk having their confidential information disclosed, Shirley said.

While generative AI may produce vast amounts of often accurate information quickly, placing employees’ confidential information at risk is contrary to the growing trend of protecting people’s data privacy, she added.

“Employers and employees should be careful not to sacrifice security for convenience,” Shirley said.
