California Updates Regs For Using AI in Employment Decisions

California continues to take steps to regulate the burgeoning use of artificial intelligence, machine learning, and other data-driven statistical processes in making consequential decisions, including those related to employment.

The California Civil Rights Council (CRC) recently issued updated proposed regulations governing automated-decision systems. In addition to these regulatory efforts, California lawmakers have introduced two bills designed to regulate AI in employment.

California’s efforts at oversight now consist of the following:

  • The CRC’s proposed modifications to employment regulations regarding automated-decision systems.
  • Assembly Bill No. 331 to add regulations relating to artificial intelligence and automated decision tools (ADTs).
  • Senate Bill No. 721 to create the California Interagency AI Working Group.

These approaches have a common goal of minimizing the potential negative consequences of artificial intelligence when deployed in the employment context.

Taken as a whole, these efforts suggest the CRC is leapfrogging the legislative process. The following summarizes the latest updates regarding California’s three-pronged approach.

Civil Rights Council’s Proposed Rules

Since publishing draft modifications to its antidiscrimination regulations in March 2022, the CRC has continued to refine its definitions of key terms without altering the primary substance of the proposed regulations. The CRC’s most recent proposal, released Feb. 10, includes the following updates:

  • CRC introduces a definition for adverse impact, which includes “the use of a facially neutral practice that negatively limits, screens out, tends to limit or screen out, ranks, or prioritizes applicants or employees on a basis protected by” the law.
  • CRC introduces a definition for artificial intelligence to mean a “machine-learning system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions.”
  • CRC introduces a definition for machine learning to mean the “ability for a computer to use and learn from its own analysis of data or experience and apply this learning automatically in future calculations or tasks.”
  • CRC broadens the definition of agent. A person acting on behalf of an employer previously qualified as an agent by providing services “related to the administration of automated-decision systems for an employer’s use in recruitment, hiring, performance, evaluation, or other assessments that could result in the denial of employment or otherwise adversely affect the terms, conditions, benefits, or privileges of employment”; the updated definition instead covers services related to “the administration of automated-decision systems for an employer’s use in making hiring or employment decisions.”
  • CRC provides a fuller list of examples of tasks that constitute automated-decision systems and clarifies that automated-decision systems exclude word-processing software, spreadsheet software, and map navigation systems.
  • CRC provides an example of the capability of algorithms to “detect patterns in datasets and automate decision-making based on those patterns and datasets” (a brief illustrative sketch follows this list).
  • CRC renames machine-learning data to “automated-decision system data.”
  • CRC lists updates to the defense against a claim of unlawful employment practices and to recordkeeping obligations.
  • CRC clarifies how an employer can defend against a showing that it engaged in an unlawful use of selection criteria that resulted in an adverse impact or disparate treatment of an applicant or employee on a protected basis. The employer can show that the selection criteria are job-related for the position in question and consistent with business necessity, and that there is no less-discriminatory policy that serves the employer’s goals as effectively as the challenged policy or practice.
  • CRC extends recordkeeping obligations not just to any person who sells or provides an automated-decision system to an employer, but also to any person “who uses an automated-decision system or other selection criteria on behalf of an employer or other covered entity.”
  • CRC clarifies the scope of records to be preserved.
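
To make the CRC’s definitions concrete, the brief sketch below shows the kind of system those terms describe: a model that learns patterns from prior screening outcomes and then applies them automatically to new applicants. It is a minimal, hypothetical illustration; the data, features, and cutoff are invented, and nothing in it comes from the proposed regulations.

```python
# Hypothetical sketch, not drawn from the proposed regulations: a toy
# "automated-decision system" in the CRC's sense, i.e., a machine-learning
# model that learns from its own analysis of data and applies that learning
# automatically when screening new applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented historical screening data: each row is
# [years_of_experience, assessment_score]; label 1 = advanced, 0 = screened out.
X_past = np.array([[1, 55], [2, 60], [3, 70], [5, 80], [7, 85], [10, 90]])
y_past = np.array([0, 0, 1, 1, 1, 1])

# The model "detects patterns in datasets" from prior outcomes...
model = LogisticRegression().fit(X_past, y_past)

# ...and "automates decision-making based on those patterns": new applicants
# are scored and screened with no further human-defined rules.
new_applicants = np.array([[2, 65], [8, 88]])
scores = model.predict_proba(new_applicants)[:, 1]  # predicted probability of advancing
print(scores, scores >= 0.5)  # 0.5 is an arbitrary illustrative cutoff
```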

Assembly Bill 331

Assembly Bill 331 would impose obligations on employers to evaluate the impact of an ADT, provide notice regarding its use, and provide for formation of a governance program. It would prohibit employers from using an ADT in a way that contributes to algorithmic discrimination.

The bill would require a deployer and a developer of an ADT to perform an impact assessment for any ADT on or before Jan. 1, 2025, and annually thereafter. The impact assessment must include:

  • a statement of the purpose of the ADT and its intended benefits, uses, and deployment contexts.
  • a description of the ADT’s outputs and how the outputs are used to make a consequential decision.
  • a summary of the type of data collected from individuals and processed by the ADT.
  • a statement of the extent to which the deployer’s use of the ADT is consistent with, or varies from, the statement required of the developer.
  • an analysis of the potential adverse impacts on the basis of sex, race, color, ethnicity, religion, age, national origin, limited English proficiency, disability, veteran status, or genetic information (a simple illustration of such an analysis appears after this list).
  • a description of the safeguards that are or will be implemented by the deployer to address any reasonably foreseeable risks of algorithmic discrimination arising from the use of the ADT.
  • a description of how the ADT has been or will be evaluated for validity or relevance.
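
As a purely illustrative example of the adverse-impact analysis item above, the sketch below compares an ADT’s selection rates across two groups and flags a large disparity for review. AB 331 does not prescribe any particular method; the group labels and counts are invented, and the four-fifths ratio used here is a common analytical heuristic rather than a requirement of the bill.

```python
# Hypothetical sketch of one piece of an impact assessment: comparing an ADT's
# selection rates across groups. The group labels and counts are invented, and
# the four-fifths ratio used as a flag is a common analytical heuristic,
# not a legal standard imposed by AB 331.
outcomes = {
    "group_a": {"applicants": 200, "selected": 80},  # selection rate 0.40
    "group_b": {"applicants": 150, "selected": 36},  # selection rate 0.24
}

rates = {g: o["selected"] / o["applicants"] for g, o in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "review for potential adverse impact" if impact_ratio < 0.8 else "no flag"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```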

AB 331 would require a deployer to notify any person who is the subject of a consequential decision that an ADT is being used to make that decision. Furthermore, if a consequential decision is based solely on the output of an ADT, a deployer must accommodate a person’s request not to be subject to the ADT and to be subject instead to an alternative selection process or accommodation.

AB 331 would require a deployer or developer to establish and maintain a governance program to map, measure, manage, and govern the risks of algorithmic discrimination associated with the use of the ADT. In relevant part, the governance program must provide for an annual and comprehensive review of policies, practices, and procedures to ensure compliance with the bill’s requirements, and for reasonable adjustments to administrative and technical safeguards in light of material changes in technology, the risks associated with the ADT, the state of technical standards, and changes in the business arrangements or operations of the deployer or developer.

Senate Bill 721

Senate Bill 721 would create a working group charged with delivering a report to the legislature regarding artificial intelligence; the group would be disbanded by Jan. 1, 2030.

The working group would consist of 10 members: two appointees by the governor, two appointees by the president pro tempore of the Senate, two appointees by the speaker of the Assembly, two appointees by the attorney general, one appointee by the California Privacy Protection Agency, and one appointee by the Department of Technology.

The working group would be required to accept input from academia, consumer advocacy groups, and small, medium, and large businesses affected by artificial intelligence policies.

With the proliferation of new regulations and laws, it is more important than ever for employers to stay abreast of developments regarding AI, especially given the potential for a resulting patchwork of obligations for those who incorporate AI into their workforce management processes.

Alice Wang and Margaret “Ellie” McPike are attorneys with Littler in San Francisco. © 2023. All rights reserved. Reprinted with permission.
