Council approves the first worldwide rules on Artificial Intelligence

EEAS

The Council has approved a groundbreaking law harmonising rules on artificial intelligence, the Artificial Intelligence Act. This flagship legislation follows a ‘risk-based’ approach: the higher the risk of harm to society, the stricter the rules. It is the first law of its kind worldwide and could set a global standard for AI regulation. The AI Act is a key element of the EU’s policy to promote the development and uptake of safe and lawful AI across the single market while ensuring respect for fundamental rights.

The European Commission, under Thierry Breton, Commissioner for Internal Market, presented the AI Act proposal in April 2021. The European Parliament’s rapporteurs on the file were Brando Benifei (S&D / IT) and Dragoş Tudorache (Renew Europe / RO). A provisional agreement between the co-legislators was reached on 8 December 2023.

The new law aims to foster the development and uptake of safe and trustworthy AI systems across the EU’s single market by both private and public actors. At the same time, it seeks to ensure respect for the fundamental rights of EU citizens and to stimulate investment and innovation in artificial intelligence in Europe. The AI Act applies only to areas within EU law and provides exemptions, such as for systems used exclusively for military and defence purposes or for research.

“The adoption of the AI Act is a significant milestone for the European Union. This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies. With the AI Act, Europe emphasises the importance of trust, transparency and accountability when dealing with new technologies, while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” said Mathieu Michel, Belgian secretary of state for digitisation, administrative simplification, privacy protection, and building regulation.

Classification of AI systems according to risk

The new law categorises AI systems according to risk. Systems presenting only limited risk are subject to light transparency obligations, while high-risk AI systems must meet strict requirements to gain access to the EU market. The law bans certain AI practices outright, such as cognitive behavioural manipulation and social scoring, and prohibits the use of AI for predictive policing based on profiling and for systems that use biometric data to categorise people. It also addresses general-purpose AI models, imposing limited transparency requirements on lower-risk models and stricter obligations on those posing systemic risks.

A new governance architecture

The AI Act establishes new governing bodies to ensure effective enforcement and advisory support:

1. an AI Office within the Commission, responsible for enforcing the common rules across the EU;
2. a scientific panel of independent experts, supporting enforcement activities;
3. an AI Board, with representatives from the member states, advising and assisting the Commission and member states on the consistent and effective application of the AI Act; and
4. an advisory forum for stakeholders, providing technical expertise to the AI Board and the Commission.

Together, these bodies form the governance architecture of the AI Act.

Transparency and fundamental rights protection

Before deploying a high-risk AI system, any organisation providing public services must assess the system’s impact on fundamental rights. The regulation, with its strong focus on protecting fundamental rights, also requires increased transparency in the development and use of high-risk AI systems: public entities and certain other users of high-risk AI systems must be registered in the EU database for high-risk AI systems, and users of emotion recognition systems must inform individuals when they are exposed to such a system.

Measures in support of innovation

The AI Act provides an innovation-friendly legal framework and aims to promote evidence-based regulatory learning. It provides for AI regulatory sandboxes: controlled environments for developing, testing, and validating innovative AI systems.

Fines for infringements are set as a percentage of the offending company’s global annual turnover or a predetermined amount, whichever is higher.

After being signed, the legislative act will be published in the EU’s Official Journal and enter into force twenty days after publication. The new regulation will apply two years after its entry into force, with some exceptions for specific provisions.
