On March 13, 2024, the European Parliament approved the Artificial Intelligence Act (the “Act”). The Act aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk Artificial Intelligence (AI), while boosting innovation and establishing Europe as a leader in the field.
The Act imposes obligations on AI systems based on their potential risks and level of impact. Certain AI applications that threaten citizens’ rights are banned outright.
Banned AI applications include:
- Biometric categorisation systems based on sensitive characteristics.
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
- Emotion recognition in the workplace and schools.
- Social scoring and predictive policing (when based solely on profiling a person or assessing their characteristics).
- AI that manipulates human behaviour or exploits people’s vulnerabilities.
The use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can be deployed only if strict safeguards are met, e.g., its use is limited in time and geographic scope and is subject to specific prior judicial or administrative authorisation.
The Act imposes additional obligations on other high-risk AI systems (those posing significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have the right to submit complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.
General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including complying with European Union copyright law and publishing detailed summaries of the content used for training. More powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting incidents.
Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.
At approximately 450 pages, the Act is quite lengthy. A copy of the text can be found on the European Parliament’s website at: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html.