AI in the Workplace: Are You Prepared?

Last month, California Gov. Gavin Newsom signed an executive order on artificial intelligence. While the order does not carry the weight of legislation or regulation, it should nevertheless prompt employers to recognize that AI has captured, and will continue to capture, the attention of all levels of government.

When it comes to AI in the workplace, there are steps employers can take now to ensure compliance with existing laws and get a head start on anticipated regulations. AI can improve workplace efficiency and lead to more consistent, merit-based outcomes in the workforce. But without the proper safeguards, AI can perpetuate or even amplify workplace bias.

Newsom’s Executive Order

Newsom’s executive order directs California state agencies to study the benefits and risks of AI in numerous applications. This study must include an analysis of risks AI poses to critical infrastructure and a cost-benefit assessment regarding how AI can impact California residents’ access to government goods and services.

In the employment context, the executive order instructs the California Labor and Workforce Development Agency to study how AI will affect the state government workforce. It also asks the agency to ensure that the use of AI in state government employment produces equitable outcomes and mitigates AI's "potential output inaccuracies, fabricated text, hallucinations and biases."

EEOC Guidance on the Use of AI

The executive order’s contemplation of AI hallucinations and biases is a nod to the Equal Employment Opportunity Commission’s (EEOC’s) Artificial Intelligence and Algorithmic Fairness Initiative, launched in 2021. To date, the EEOC has published two technical assistance documents regarding how using AI in the workplace can result in unintentional disparate impact discrimination.

The first guidance, issued in May 2022, concerns the Americans with Disabilities Act (ADA). In this guidance, the EEOC clarified that AI refers to any “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” In the workplace, this definition generally means using software that incorporates algorithmic decision-making to either recommend or make employment decisions. Some common AI tools used by employers include automated candidate sourcing, resume-screening software, chatbots and performance analysis software.

The EEOC explained that, to comply with the ADA, employers using AI in the workplace should provide reasonable accommodations to applicants or employees who cannot be rated fairly or accurately by an AI tool. For example, a job applicant with limited manual dexterity because of a disability may score poorly on a timed knowledge assessment that requires a keyboard, trackpad or other manual input device. Similarly, interview analysis software may unfairly rate an individual with a speech impediment. In both scenarios, the EEOC recommends that the employer provide an alternative means of assessment.

The second EEOC guidance, issued May 18, 2023, addresses the use of AI in compliance with Title VII of the Civil Rights Act of 1964. As related to AI, the EEOC's primary concern is not intentional discrimination but unintentional disparate impact discrimination. In such cases, an employer's intent is irrelevant: if a neutral policy or practice, such as an AI tool, has a disparate impact on a protected group, that policy could be unlawful.

Undisciplined use of resume-screening tools is a commonly cited example of how AI can lead to disparate impact discrimination. Used properly, resume screeners can improve efficiency and surface the best candidates for the job. But if the tool is fed input or training data that favors a particular group, it may exclude individuals who do not satisfy those biased criteria. The tool may also unintentionally rely on proxies for protected categories, such as zip code standing in for race, as illustrated below.
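To make the proxy risk concrete, here is a minimal sketch in Python using entirely hypothetical zip codes and demographic labels (no real data or vendor tool is assumed). It measures how accurately a protected attribute can be guessed from a single input feature; when a feature predicts a protected category far better than chance, it can act as a stand-in for that category even if the category itself is never given to the tool.

from collections import Counter, defaultdict

def proxy_strength(pairs):
    """Compare two guessing strategies for a protected attribute:
    always guessing the overall majority group vs. guessing the
    majority group within each value of the candidate feature."""
    overall = Counter(group for _, group in pairs)
    baseline = max(overall.values()) / len(pairs)

    by_feature = defaultdict(Counter)
    for feature, group in pairs:
        by_feature[feature][group] += 1
    correct = sum(max(counts.values()) for counts in by_feature.values())
    return baseline, correct / len(pairs)

# Hypothetical applicant records: (zip code, demographic group)
pairs = (
    [("94105", "X")] * 90 + [("94105", "Y")] * 10
    + [("94601", "Y")] * 85 + [("94601", "X")] * 15
)

baseline, via_zip = proxy_strength(pairs)
print(f"Guess majority group every time: {baseline:.0%} accurate")
print(f"Guess from zip code alone:       {via_zip:.0%} accurate")

In this toy example, zip code predicts group membership far better than chance, so a screening tool that weighs zip code heavily may effectively be weighing race, even though race is never an input.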

Steps to Take Now

Employers using AI should act now to position themselves for compliance with existing law and with the additional laws that are likely to follow. Consider these steps.

1. Be transparent. A common theme in the EEOC's guidance is that a lack of transparency with applicants and employees can give rise to discrimination claims. For example, if applicants with a disability do not know they are being assessed by an algorithmic tool, they may not realize they can request a reasonable accommodation. EEOC guidance aside, transparency about the use of AI is already a legal requirement in some jurisdictions, including New York City. Under a law that went into effect earlier this year, New York City employers must disclose their use of AI, perform bias audits of their AI tools and publish the results of those audits. Other jurisdictions, including Massachusetts, New Jersey and Vermont, have proposed similar employment-related AI legislation.

2. Vet AI vendors. Employers often cannot defend against discrimination claims simply by saying, "the AI did it." So it is important to ask vendors whether the tool has been designed to mitigate bias and to learn as much as feasible about how the tool works. Some vendors may be reluctant to share details they deem proprietary. In those scenarios, employers should either look elsewhere or demand strong indemnity rights in the vendor contract.

3. Audit. One way AI tools can cause a disparate impact is through homogeneous input data. After the tool has been trained or configured on a set of inputs, such as the resumes of high-performing employees, audit its output to determine whether it produces a disparate impact; a simple version of such a check is sketched below.
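As one illustration of such an audit, the EEOC's Title VII guidance discusses the long-standing "four-fifths rule" of thumb: a selection rate for one group that is less than 80 percent of the highest group's rate may indicate adverse impact, though the guidance cautions that this is not a definitive test. Below is a minimal sketch in Python applying that rule to hypothetical screening results; the group labels and pass rates are invented for illustration.

from collections import defaultdict

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) records."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical screening outcomes: (demographic group, passed the screen?)
records = (
    [("A", True)] * 48 + [("A", False)] * 52
    + [("B", True)] * 30 + [("B", False)] * 70
)

rates = selection_rates(records)
top_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / top_rate  # "impact ratio" relative to the top group
    status = "flag for review" if ratio < 0.8 else "ok"
    print(f"Group {group}: selected {rate:.0%}, impact ratio {ratio:.2f} ({status})")

A flagged ratio does not by itself establish discrimination, and an unflagged one does not rule it out; sample sizes and job-relatedness matter. The point is that this kind of check is simple enough to run routinely, ideally with input from counsel and a data scientist, as noted below.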

Finally, employers need to stay apprised of developments in the law. Executive orders and guidance documents are often a prelude to legislation and regulatory action. To avoid becoming a test case, it’s a good idea to partner with qualified employment counsel and data scientists when using AI tools in the workplace.

Kevin White and Daniel Butler are attorneys with Hunton Andrews Kurth in Washington, D.C., and Miami, respectively.
