European Commission approves revolutionary AI Act

To regulate the burgeoning field of Artificial Intelligence (AI) and manage its potential impact on society, the European Commission has announced that a political agreement has been reached on the AI Act.

Initially proposed in April 2021, these comprehensive regulations, set to apply uniformly across all Member States, will provide a robust framework that protects EU citizens while helping to drive the development of AI technologies.

The finalised political agreement now awaits formal approval from both the European Parliament and the Council. Once published in the Official Journal, the Act will enter into force 20 days later.

The AI Act’s enforcement will begin two years after its entry into force, although certain provisions will apply sooner: the prohibitions after six months and the rules on General Purpose AI after twelve.

During the interim phase, before the regulation takes full effect, the Commission plans to introduce an AI Pact. This initiative aims to gather AI developers globally, particularly from Europe, to voluntarily adhere to essential AI Act obligations before the mandated deadlines.

Ursula von der Leyen, President of the European Commission, commented: “Artificial intelligence is already changing our everyday lives. And this is just the beginning. Used wisely and widely, AI promises huge benefits to our economy and society.

“Therefore, I very much welcome today’s political agreement by the European Parliament and the Council on the Artificial Intelligence Act. The EU’s AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide.

“So, this is a historic moment. The AI Act transposes European values to a new era. By focusing regulation on identifiable risks, today’s agreement will foster responsible innovation in Europe.

“By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU. Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI.”

Measures introduced by the AI Act

The regulations are designed around a future-proof definition of AI and adopt a risk-based approach, sorting AI systems into the following tiers (summarised in the sketch after the tier descriptions below):

Minimal risk applications

The vast majority of AI systems, such as recommender systems and spam filters, fall into the category of minimal risk.

These applications, which pose minimal or no threat to citizens’ rights or safety, will benefit from a free pass and be exempt from strict obligations.

However, companies have the option to commit voluntarily to additional codes of conduct for these systems.

High-risk systems

Identified high-risk AI systems will face stringent requirements, including robust risk-mitigation measures, high-quality datasets, activity logging, detailed documentation, transparent user information, human oversight, and a high level of robustness, accuracy, and cybersecurity.

Regulatory sandboxes will facilitate responsible innovation in developing compliant AI systems.

Examples of high-risk AI systems include critical infrastructure in fields such as water, gas, and electricity; medical devices; systems determining access to education or used in recruitment; and certain systems used in law enforcement, border control, and democratic processes.

Biometric identification, categorisation, and emotion recognition systems also fall under this high-risk umbrella.

Unacceptable risk

AI systems posing a clear threat to fundamental human rights will be banned outright. This includes systems that manipulate human behaviour, such as toys encouraging risky behaviour in minors, as well as ‘social scoring’ systems.

Additionally, certain uses of biometric systems, like emotion recognition in workplaces or real-time remote biometric identification in publicly accessible spaces, will face prohibition, barring narrow exceptions.

Specific transparency risks

Transparency will be pivotal when employing AI systems such as chatbots. Users must be informed when they are interacting with a machine, AI-generated content and deep fakes must be labelled as such, and users must be told when biometric categorisation or emotion recognition systems are in use.

Providers need to ensure synthetic content is detectable as artificially generated or manipulated.
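Taken together, these tiers map AI systems to obligations of increasing stringency. The minimal sketch below summarises that mapping in Python; the names are hypothetical and the obligation lists simply paraphrase the descriptions above, not the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical encoding of the AI Act's four risk tiers."""
    MINIMAL = "minimal risk"
    SPECIFIC_TRANSPARENCY = "specific transparency risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

# Illustrative mapping from each tier to the obligations described above
OBLIGATIONS = {
    RiskTier.MINIMAL: ["voluntary codes of conduct only"],
    RiskTier.SPECIFIC_TRANSPARENCY: [
        "inform users they are interacting with a machine",
        "label AI-generated content and deep fakes",
    ],
    RiskTier.HIGH: [
        "risk-mitigation measures", "high-quality datasets",
        "activity logging", "detailed documentation",
        "human oversight", "robustness, accuracy and cybersecurity",
    ],
    RiskTier.UNACCEPTABLE: ["prohibited outright"],
}

# Example: a spam filter sits in the minimal-risk tier
print(OBLIGATIONS[RiskTier.MINIMAL])
```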

Punishment for non-compliance

Non-compliance with these regulations will attract fines that scale with the severity of the violation. Violations of the banned AI applications could result in fines of up to €35m or 7% of global annual turnover, whichever is higher, while breaches of other obligations might incur penalties of up to €15m or 3%. Supplying incorrect information could attract fines of up to €7.5m or 1.5%.
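To illustrate how these caps scale, the sketch below computes the maximum possible fine for a hypothetical company, assuming the higher of the fixed amount and the turnover percentage applies in each tier:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, turnover_eur: float) -> float:
    """Return the upper fine limit: the higher of the fixed cap
    and the given percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Hypothetical company with €2bn in global annual turnover
turnover = 2_000_000_000

print(max_fine(35_000_000, 0.07, turnover))   # banned applications: €140m
print(max_fine(15_000_000, 0.03, turnover))   # other breaches: €60m
print(max_fine(7_500_000, 0.015, turnover))   # incorrect information: €30m
```

For large companies the percentage-based cap quickly dominates the fixed amount, which keeps the maximum fine proportionate to company size.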

General purpose AI and governance

Dedicated rules for general-purpose AI models will enforce transparency along the value chain under the AI Act. Stringent obligations will manage the risks associated with the most powerful models, operationalised through codes of practice developed by industry and other stakeholders.

National competent authorities will supervise implementation at the national level, while the new European AI Office will oversee enforcement at the European level. Expected to become an international reference point, the AI Office will be instrumental in implementing and enforcing binding rules on AI.

A scientific panel of independent experts will play a central role in monitoring systemic risks associated with general-purpose models, contributing to their classification and testing.

The European Commission’s move to regulate AI represents a significant step towards ensuring responsible innovation and safeguarding citizens against potential threats arising from AI systems.

These stringent rules aim to strike a delicate balance between fostering innovation and protecting fundamental human rights in an increasingly AI-driven world.
