EU Reaches Deal on Regulation of AI

On Dec. 8, European Union (EU) policymakers reached a deal on the proposed AI Act, which provides steep penalties for violations.

In the U.S., Congress has not yet drafted bipartisan legislation on AI but is in the early stages of doing so. President Joe Biden signed a first-of-its-kind executive order Oct. 30 on the development of AI.

We’ve gathered articles on the news from SHRM Online and other media outlets.

EU Deal

The deal appeared to ensure that the European Parliament could pass the legislation, perhaps before the end of this year, ahead of its break in May 2024 for legislative elections. Once passed, the law would take two years to come into effect.

The EU deal on AI came together after lengthy talks between representatives of the European Commission, which proposes laws, and the European Council and European Parliament, which adopt them. Companies violating the AI Act could face fines up to 7 percent of global revenue, depending on the violation and size of the company.

In the U.S., senators signaled that the United States would take a far lighter approach than the EU and focus instead on incentivizing developers to build AI in the U.S.

(The Washington Post)

Risk-Based Approach

EU policymakers agreed to a risk-based approach to regulating AI, where a defined set of applications face the most restrictions. Companies that make AI tools that pose the most potential harm to individuals, such as in hiring and education, would need to provide regulators with proof of risk assessments, breakdowns of what data was used to train the systems and assurances that the software did not cause harm like perpetuating racial biases. Human oversight would be required in creating and deploying the systems.

(The New York Times)

Emotion Recognition Systems

The EU law would ban several uses of AI, including bulk scraping of facial images and most emotion recognition systems in workplace and educational settings. There are safety exceptions—such as using AI to detect a driver falling asleep. Citizens would have a right to submit complaints about AI systems and receive explanations about decisions on high-risk systems that affect their rights.

(Axios and The Verge)

Model for Regulatory Authorities

The AI Act, originally introduced in April 2021, is expected to play a major part in shaping AI in the EU and affect companies globally that have operations in Europe.

(SHRM Online)

Executive Order in U.S.

In the U.S., Biden’s executive order aims to shape how the technology evolves in a way that maximizes its potential while limiting its risks. The order requires the tech industry to develop safety and security standards, introduces new consumer and worker protections, and assigns federal agencies a to-do list for overseeing the rapidly progressing technology.

The executive order sets an example for the private sector by, among other things, establishing standards and best practices for detecting AI-generated content and authenticating official government communications. It also requires vendors that develop AI software to share their safety test results, which will help government agencies and private companies that use AI tools.

(SHRM Online and SHRM Online)

DOL’s Response to the AI Executive Order

The U.S. Department of Labor (DOL) has scheduled three public listening sessions on AI, focusing on the risks and impacts the technology presents to workers, as well as employer surveillance using AI.

(Bloomberg Government)
