With employers relying more often on artificial intelligence (AI) to help hire new workers, states have begun implementing new laws and regulations to govern how employers can deploy such tools.
“You can expect more city and state regulation of this because they’re not going to stand around and wait for the federal government to step in. The states are moving to protect their citizens,” Kelly Dobbs Bunting, an attorney with Greenberg Traurig in Philadelphia, told attendees during a concurrent session at the SHRM Annual Conference & Expo 2023 in Las Vegas. “There is a tsunami coming of state regulation.”
Some businesses use AI software that scores job candidates based on their facial expressions, vocal intonation, word choice, eye movement and emotional responses during video interviews to determine if they are trustworthy.
The algorithms in AI hiring software have the potential to discriminate against Black candidates and people with disabilities.
“Maybe somebody’s got a speech impediment. Maybe somebody can’t hear very well. Maybe somebody’s got some sort of visual impairment, so if you’re using video to screen applicants, these people aren’t going to score very well, and it’s unlikely that they are going to be moved on to the next round” of the hiring process, Dobbs Bunting said.
Employers must offer an accommodation or an alternative screening process for applicants with disabilities, she said.
Focus on Enforcement
Four federal agencies have pledged to collaborate closely to prevent discrimination resulting from the use of AI and automated decision tools in the workplace. The U.S. Equal Employment Opportunity Commission, the U.S. Department of Justice, the Consumer Financial Protection Bureau and the Federal Trade Commission (FTC) recently highlighted their commitment to enforcing existing civil rights and consumer protection laws as they apply to AI in the workplace.
“Employers will be responsible for rooting out and curing any bias created by the AI software that they use in employment-related decisions,” Dobbs Bunting said. “This includes putting job advertisements on social media platforms that use AI tools to decide which resumes to push forward.”
In general, the FTC has found deceptive uses of facial recognition technology to be a violation of the Federal Trade Commission Act.
In 2019, the Electronic Privacy Information Center (EPIC) in Washington, D.C., filed a complaint with the FTC against HireVue, a South Jordan, Utah-based hiring software company. EPIC said HireVue engaged in unfair and deceptive trade practices because it didn’t tell users that it collected facial data during video interviews and because candidates could not access their algorithmic scores. HireVue later agreed to stop using facial analysis of job candidates. The company did not respond to a request for comment.
To prevent discriminatory outcomes, employers must “conduct ongoing analyses of the software, even if the vendor says, ‘Don’t worry; I got you. The software’s good.’ Do not believe that, because you will be held liable along with the vendor, if the vendor is mistaken about its impact on hiring,” Dobbs Bunting said. “You have a duty to understand the software, understand how it was tested and continue to test it.”
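Dobbs Bunting did not prescribe a particular testing method. As a rough illustration only, the sketch below shows one common way to monitor a screening tool's results: compare selection rates across demographic groups and flag large gaps, along the lines of the EEOC's four-fifths rule of thumb. The group labels, sample data and 0.8 threshold are illustrative assumptions, not part of her guidance.

```python
# Illustrative sketch (not from the article): compare selection rates across
# groups and flag any group whose rate falls well below the highest-rate group.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, advanced) pairs, where advanced is True/False."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest-rate group's rate."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Made-up screening results for demonstration.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(sample)
for group, ratio in impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # 0.8 mirrors the four-fifths rule of thumb
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

In practice, a check like this would run on real applicant outcomes at regular intervals and alongside, not instead of, legal and vendor review.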
Human Touch
Employers shouldn’t base employment decisions on AI alone. “There’s got to be a human involved. It cannot be just a machine-driven analysis and result,” Dobbs Bunting said.
Having a machine render a final decision is not strategic or humane, said Otto Berkes, co-founder of Xbox and CEO of HireRoad, an Arlington, Va.-based talent acquisition platform.
“AI is a tool and shouldn’t be framed as being in competition with people. Ever,” Berkes said in an email, separately from the conference. “HR professionals may opt to leverage AI for rote tasks—things like job descriptions—but even then, I would recommend that a person be the final reviewer.”
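Neither speaker described a specific implementation, but the human-in-the-loop idea can be sketched as a simple gate: the AI output is treated only as a recommendation, and no outcome is recorded until a named reviewer signs off. The class and field names below are hypothetical.

```python
# Illustrative sketch (assumed design, not from the article): the AI suggestion
# never becomes the outcome on its own; a human reviewer must finalize it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    candidate_id: str
    ai_suggestion: str              # e.g., "advance" or "reject"
    reviewer: Optional[str] = None
    final_decision: Optional[str] = None

    def finalize(self, reviewer: str, decision: str) -> None:
        """Record a decision only when a human reviewer signs off."""
        self.reviewer = reviewer
        self.final_decision = decision

rec = Recommendation(candidate_id="C-123", ai_suggestion="advance")
assert rec.final_decision is None          # no decision without a human
rec.finalize(reviewer="hr_reviewer_1", decision="advance")
print(rec)
```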
Dobbs Bunting recommended companies develop policies that spell out when employees can and cannot use ChatGPT or other AI to perform their work, such as generating reports.
“Sooner or later, this [AI] is going to become the norm, but it’s moving so fast,” she said.