In a groundbreaking move, Members of the European Parliament (MEPs) have reached a historic political deal with the Council, solidifying a comprehensive set of regulations aimed at ensuring the safe and ethical use of artificial intelligence (AI) in Europe. The provisional agreement on the Artificial Intelligence Act reflects a commitment to balancing innovation with the protection of fundamental rights, democracy, and environmental sustainability.
Safeguards and restrictions
The agreed-upon regulations encompass a range of safeguards and prohibitions:
• Biometric categorization systems: The use of biometric categorization systems based on sensitive characteristics such as political, religious, and philosophical beliefs, sexual orientation, and race is strictly prohibited.
• Facial recognition: The untargeted scraping of facial images from the internet or CCTV footage for the creation of facial recognition databases is banned.
• Emotion recognition: Employing AI for emotion recognition in workplaces and educational institutions is off-limits.
• Social scoring: AI-driven social scoring based on personal behavior or characteristics is prohibited.
• Manipulative AI: Systems designed to manipulate human behavior, or to exploit people's vulnerabilities due to their age, disability, or social or economic situation, are not allowed.
Law enforcement exceptions
While recognizing the importance of law enforcement applications, negotiators have established safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces. Such use is subject to prior judicial authorization and limited to strictly defined lists of crimes. Both "post-remote" and "real-time" RBI come with strict conditions to ensure targeted searches and prevent abuse.
High-risk system obligations
For AI systems classified as high-risk due to their potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law, explicit obligations have been outlined. These include mandatory fundamental rights impact assessments, which also apply to the insurance and banking sectors. AI systems used to influence elections and voter behavior likewise fall into this category. Citizens will have the right to launch complaints about AI systems and to receive explanations of decisions based on high-risk systems that affect their rights.
General artificial intelligence systems
Recognizing the diverse capabilities of general-purpose AI (GPAI) systems, transparency requirements have been established: technical documentation, compliance with EU copyright law, and the publication of detailed summaries of the content used for training are all mandated. High-impact GPAI models with systemic risk face more stringent obligations, including model evaluations, risk assessments, adversarial testing, and reporting on incidents and energy efficiency.
Supporting innovation and SMEs
To foster innovation, especially among small and medium-sized enterprises (SMEs), the agreement promotes regulatory sandboxes and real-world testing. National authorities will establish these frameworks to allow businesses to develop and train innovative AI solutions before market placement.
Sanctions
Strict penalties are in place for non-compliance, with fines ranging from 7.5 million euros or 1.5% of global turnover up to 35 million euros or 7% of global turnover, depending on the nature of the infringement and the size of the company.
Voices of approval
Co-rapporteurs Brando Benifei and Dragos Tudorache expressed satisfaction with the deal, emphasizing the significance of this legislation in placing rights and freedoms at the forefront of AI development. They highlighted the importance of correct implementation, continued parliamentary oversight, and the impact the AI Act will have on Europe's digital future.
Next steps
The agreed-upon text will now undergo formal adoption by both the Parliament and Council to become EU law. Committees within Parliament, such as Internal Market and Civil Liberties, are scheduled to vote on the agreement in the near future, marking a critical step toward the realization of this comprehensive regulatory framework for trustworthy AI.