EU AI Act to pass into law after historic vote, introducing rules that ban some AI applications and impose strict requirements on ‘high-risk’ AI

Ben Wodecki, Junior Editor - AI Business

March 14, 2024

5 Min Read
After years of debate, the AI Act will slowly come into force over the next 24 months

European lawmakers have passed the world’s first major law regulating AI, designed to safeguard rights, democracy and the rule of law from the potential threats posed by high-risk AI.

The measure passed 523 to 46 with 49 abstentions.


AI applications that could violate citizens’ rights will be banned, including biometric categorization systems based on sensitive characteristics and AI that manipulates human behavior.

Organizations that deploy AI systems that violate the act face fines that range from $8 million (€7.5 million) or 1.5% of global annual revenue to $38 million (€35 million) or 7% of revenue.

Thierry Breton, the EU’s Internal Market Commissioner, said that Europe is now a “global standard-setter in AI,” after the legislation passed.


Romanian MEP Dragos Tudorache, one of the co-rapporteurs of the EU AI Act, said the act is a starting point for technology governance.

“Much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labor markets and the way we conduct warfare.”

What Now?

The EU AI Act will go through final lawyer-linguist checks. It also needs to be formally endorsed by the Council.

Once endorsed, it will enter into force twenty days after its publication in the Official Journal and be fully applicable 24 months after it enters into force.

There are some exceptions: bans on prohibited practices will apply six months after the entry into force date, codes of practice will come in nine months after entry into force and rules on general-purpose AI systems will apply 12 months after entry into force.

The EU’s dedicated AI Office will also be set up to help businesses begin complying with the rules before they take effect.

What is Included in The EU AI Act?

The EU AI Act is the world's first comprehensive regulation governing AI. It introduces a risk-based system that categorizes AI systems according to their potential to impact citizens' rights. Those more likely to impede civil liberties would be subject to strict rules or could be banned outright.

The obligations the EU AI Act introduces are:

Emotion recognition in the workplace and schools, social scoring and predictive policing based on profiling a person or their characteristics would be banned, as would untargeted scraping of facial images from the internet or CCTV footage.

Law enforcement agencies can use biometric identification systems, but only where certain safeguards are met. This was a major sticking point during the bill's years of negotiation; the final version stipulates that agencies can use such systems with prior judicial or administrative authorization to search for a missing person or prevent a terrorist attack.

An AI system deemed high-risk would be subject to obligations including risk assessments. Users would also be required to maintain logs recording when the system was used and to ensure human oversight is in place. Citizens also have the right to submit complaints about AI systems and to receive explanations of how high-risk AI systems make decisions that affect their rights.

Some AI systems will be required to meet certain transparency requirements. Developers of general-purpose AI systems, like OpenAI’s GPT-4, would have to publish detailed summaries of the content used for training. The more powerful a model, the more transparency requirements it faces, including model evaluations, assessing and mitigating systemic risks, and reporting on incidents.

AI-generated audio and visual content, including images and videos, will also need to be labeled as such.

The EU AI Act also includes provisions to make testing and sandbox capabilities accessible to SMEs and startups.

Reaction

Forrester’s principal analyst Enza Iannopollo said the adoption of the AI Act “marks the beginning of a new AI era.”

“The extra-territorial effect of the rules, the hefty fines, and the pervasiveness of the requirements across the ‘AI value chain’ mean that most global organizations using AI must – and will – comply with the act. Some of its requirements will be ready for enforcement later this year.”

John Buyers, head of AI at law firm Osborne Clarke, said firms need to make the best use of the staggered periods for compliance.

“Businesses need to make good use of the intervening time to understand how the AI Act will bite on their products and services and the supply chains that feed into them, and plan capacity and resources to ensure compliance in good time."

Patrick Van Eecke, head of law firm Cooley’s European cyber, data and privacy practice, said the AI Act will have a similar impact to the EU’s General Data Protection Regulation.

“The Brussels effect should not be underestimated. The EU now has the world’s first hard-coded AI law. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR.”

Alois Reitbauer, chief technology strategist of Dynatrace, expressed compliance concerns.

“It’s impossible to see how organizations will be able to comply if they aren’t first clear on what constitutes an AI model, so the EU will first need to ensure that has been clearly defined,” he said. “For example, will the machine learning used in our mobile phones or connected thermostats be classed as an AI system?”

“There is also a danger of the EU falling behind the rest of the world if it only considers AI as a negative force to be contained. It needs to balance new regulatory controls with investments that encourage research into positive use cases for AI that can help solve some of the world’s most pressing challenges.”

This article first appeared on IoT World Today's sister site, AI Business.

About the Author(s)

Ben Wodecki

Junior Editor - AI Business

Ben Wodecki is the junior editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to junior editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others.
