How to Manage Generative AI and ChatGPT in the Workplace

Question: What does generative artificial intelligence have in common with the 2023 Academy Award winner for best picture? Answer: They’re both “Everything Everywhere All at Once.”

That joke comes courtesy not of ChatGPT but of a corny human. But how else to describe the frenzy surrounding the new crop of conversational AI tools—which many pundits predict will spur a fourth industrial revolution?

“I think it’s been less than 120 days since ChatGPT was released, as we’re talking,” said Michael Chui, a partner at the McKinsey Global Institute, McKinsey & Company’s business and economics research arm in San Francisco. “Yet it feels like three years, right?” According to an article Chui co-authored for McKinsey in December, more than a million users logged in to OpenAI’s ChatGPT platform within five days of its release.

“It’s kind of like, it took 40 years of effort, and then it happened in a day,” said Josh Bersin, global industry analyst and CEO of the human capital advisory firm The Josh Bersin Company in Oakland, Calif. “I talked to 15 vendors in the last two weeks,” he said, naming Workday, SAP, Oracle and Phenom, “and they’re all building new AI chat interfaces into their systems.”

At the same time, more chatbots become available by the day—most recently, Microsoft’s Bing AI and Google’s Bard. In mid-March, Microsoft also announced Copilot, which will integrate generative AI into its Microsoft 365 apps, including Word, Outlook, PowerPoint and Excel. The far-reaching workplace implications stunned even IT insiders.

“I have Microsoft on my board,” said Tori Miller Liu, president and CEO of the Association for Intelligent Information Management, a nonprofit in Silver Spring, Md. “I knew it was coming … and, still, watching it, I was like, ‘Dang. That’s something.’ It felt like a fundamental shift.”

It’s the job of HR to be ready for anything, but the newness of generative AI combined with the dizzying hype and potential legal ramifications can make it feel particularly intimidating to address. There are still so many unknowns, yet workers need guidance now along with reassurance that their skills—and jobs—still matter. Company leaders must plow ahead doing what they always do: the best they can.

“Step one is probably for any leader, or anyone in general, to approach it from a sense of humility,” Liu said.

That includes doing a lot of listening and learning. People manager positions “ask for us to both be thinking about what our team is raising as problems … but also to be proactive and thinking carefully about what our team might not know yet,” said Damien Williams, assistant professor of philosophy and data science at the University of North Carolina at Charlotte.

The Impact on Office Technology

Being honest about the upfront effort required to implement AI in the workplace is a fundamental tenet of change management for any technology, said Tim Sackett, SHRM-SCP, president of HRU Technical Resources, an engineering and IT staffing firm in Lansing, Mich.

Tell your team: “It’s going to be new to us, and we’re going to be slow at it,” he said. “Eventually we’ll see exponentially better efficiency, but we have to give it all our effort and know there’s going to be frustration.” Sackett often asks people to compare how they felt the first time they used a smartphone with how adept they feel now.

Framing AI as a “tool” or “assistant” (or, as Microsoft says, a “copilot”) may allay people’s fears about job loss. The larger point to emphasize is that the technology may already be changing the way everyone works. Because many generative AI tools have been rolled out free to the public, some people are already integrating them into their lives.

“One of the remarkable things … is how much leaders—whether human-capital or HR leaders or just general managers—have some personal experience with these technologies,” Chui said. “That isn’t always true.”

[SHRM members-only HR Q&A: What is artificial intelligence and how is it used in the workplace?]

A survey by TalentLMS, a learning management system backed by learning tech vendor Epignosis, found that employees are not only familiar with ChatGPT, but they’re also using it on the job. Of the 1,000 employees surveyed, 70 percent had already used ChatGPT at work, even though only 14 percent reported having received any training on generative AI.

Moreover, 61 percent of those who used ChatGPT said it improved their time management, while 57 percent reported it boosted their productivity. “This technology is totally horizontal and can be used to automate and speed up tasks in most job roles and industries,” said Thanos Papangelis, CEO of Epignosis and co-founder of TalentLMS.

That’s why it’s imperative to understand its risks and limitations—and talk to your teams about them.

“The way I’ve been talking about it with my own staff is that employees are still responsible for the output of their work. Even as the technology becomes more sophisticated, I think humans have a responsibility to leverage their own human abilities,” Liu said. That means learning “to write really good prompts for AI, manage that output, edit it, audit it and provide feedback to the tool if it got something wrong.”

Which it often does.

“The term we use in the field is that these systems ‘hallucinate,’ ” Chui said, meaning they provide inaccurate information with no indication it’s wrong. When a friend of Chui’s asked AI to write Chui’s biography for his birthday, the chatbot stated Chui went to Harvard and MIT and co-wrote the book The Second Machine Age—“all honorable things,” Chui said, “but none of them are true.”

“Don’t assume that, just because the software has produced something that’s intelligently presented, that it’s completely correct,” Bersin said. And just as employees are accountable for their work, leaders are accountable for the company’s. “Your brand, your reputation and your quality are on the line.”

Beware of AI Biases

Not only does AI get things wrong, but it “gets some things more wrong than others,” Williams said. The ways that women, racial minority groups, individuals with disabilities and others are reflected in the datasets can “have outsized effects on already-marginalized populations … in particularly harmful, derogatory, dismissive and minimizing ways,” he said.

That’s not to say you should ignore or ban the technology, experts agreed, but proceed with caution and tempered enthusiasm. “When the first word processor came out, I don’t think a lot of people said, ‘Stick to your typewriter,’ ” Bersin said. Rather, tell employees to try these tools to see how they can help them do their jobs better, he advised. “Don’t be afraid of them.”

Appoint a cross-functional team of “champions” to experiment, Liu suggested. “Give them some resources, let them play around,” she said, with the goal of developing recommendations on training and policies.

When it comes to leading groups like this, Williams recommends focusing on marginalized employees most likely to be negatively affected by these tools. Doing so can help organizations identify—and avoid—bias problems before they occur.

And it’s not too soon to start hashing out a policy, as long as you do it with a flexible mindset. “The policy should cover guidelines on what type of data is appropriate to input into the platform, who has access to the data and how the data is stored and managed,” Papangelis said.

Added Chui: “What do you expect of the provider or vendor of these technologies … with regard to confidentiality or with regard to intellectual property?”

Liu put forward additional questions: Are the algorithms and datasets developed in accordance with your organization’s diversity, equity and inclusion policies? And if a case of bias is uncovered, how will you correct for it? “Those are the kind of uncomfortable questions we have to start having with vendors and certainly with in-house software developers and programmers,” she said.

One thing is clear: If leaders don’t engage in an open dialogue with workers around generative AI, “then we’re just hoping they will understand, and a lot of them won’t, and those are the ones who will be left behind,” Sackett said. “I think we have to be really conscious [about that] … and get to the point where they go, ‘Oh, this is what the new job looks like. This is what the new work looks like.’ ”


Christina Folz is a freelance writer and editor based in Springfield, Va.

SHRM Toolkit Tips

Mitigating Bias in AI

Employers that are considering using AI-powered tools in their workplace should take the following actions:

  • Develop multidisciplinary innovation teams that include legal and human resource staff.
  • Continue human review of AI-assisted decision-making.
  • Implement disclosure and informed consent when necessary and appropriate.
  • Audit what is being measured before implementing the program, and on an ongoing basis.
  • Impose tight controls on data access.
  • Engage in careful external vendor contract reviews.
  • Work with vendors that take an inclusive approach to design. Consider whether the designers and programmers come from diverse backgrounds and have diverse points of view.
  • Insist on the right to review external validation studies.

Read the full SHRM members-only toolkit: Using Artificial Intelligence for Employment Purposes
