AI Act: Strict rules for high-risk AI to apply from 2027
523 votes in favor, 46 against, and 49 abstentions: the AI Act has just been approved by a large majority in the EU Parliament. As previously reported, the law is intended to regulate, across the EU member states, which AI applications, AI models, and high-risk systems may be used for which purposes – and for which not. The AI Act also provides for strict bans on a wide range of use cases, as well as strict rules for providers of so-called general-purpose AI (GPAI) systems, such as OpenAI’s GPT-4 or Google’s Gemini.
After approval in the EU Parliament, the AI regulation is now expected to enter into force before the end of the current legislative period, which ends in 2024 with the upcoming EU elections. It will not take effect immediately after that, however; instead, there are transition periods, namely the following (a short sketch after the list works out the concrete dates):
- 24 months after entry into force, i.e. from mid-2026, the AI Act will be fully applicable, with the exception of:
  - bans on certain AI practices, which apply six months after entry into force, i.e. from the end of 2024
  - codes of practice, which apply nine months after entry into force, i.e. from around the second quarter of 2025
  - rules for general-purpose AI, including governance, which apply 12 months after entry into force, i.e. from mid-2025
  - obligations for high-risk systems, which apply 36 months after entry into force, i.e. from mid-2027
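Since every deadline is counted from the date of entry into force, the concrete dates follow mechanically from that one date. The minimal Python sketch below works them out; the entry-into-force date of June 1, 2024 is a hypothetical assumption for illustration, since the actual date depends on publication in the EU's Official Journal.

```python
from datetime import date

# Hypothetical entry-into-force date (assumed: mid-2024); the actual
# date depends on publication in the EU's Official Journal.
ENTRY_INTO_FORCE = date(2024, 6, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of calendar months."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

# Transition periods from the AI Act timeline, in months after entry into force.
MILESTONES = {
    "Bans on certain AI practices": 6,
    "Codes of practice": 9,
    "General-purpose AI rules incl. governance": 12,
    "AI Act fully applicable": 24,
    "Obligations for high-risk systems": 36,
}

for label, months in MILESTONES.items():
    print(f"{label}: from {add_months(ENTRY_INTO_FORCE, months):%B %Y}")
```

With this assumed date, the output reproduces the milestones above: bans from December 2024, full applicability from June 2026, and high-risk obligations from June 2027.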
As reported, the AI Act imposes a number of bans on the use of AI systems, including:
- biometric categorization systems based on sensitive characteristics
- untargeted scraping of facial images from the internet or from video surveillance footage to create facial recognition databases
- emotion recognition in the workplace and in schools
- social scoring
- predictive policing (when based solely on profiling a person or assessing their characteristics)
- AI that manipulates human behavior or exploits people’s vulnerabilities
There are exceptions for real-time remote biometric identification (RBI) systems, which law enforcement may use for a limited time and in a limited geographical area, subject to prior judicial or administrative authorization, for example to conduct a targeted search for a missing person or to prevent a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case that requires judicial authorization and must be linked to a criminal offense.
High-risk AI has a three-year transition period
Then there are the high-risk systems, which are not banned but may only be used under strict conditions, with corresponding obligations for providers. Systems count as high-risk if they are used in the following areas:
- critical infrastructures
- education and training
- employment, essential private and public services (e.g. healthcare, banking)
- certain systems in the areas of law enforcement, migration, and border management
- justice and democratic processes (e.g. influencing elections)
Such systems must assess and reduce risks, keep usage logs, be transparent and accurate, and ensure human oversight, according to the EU Parliament. As already mentioned, the rules for high-risk systems will only apply from mid-2027.
ChatGPT & Co are also regulated
Finally, the AI Act also addresses what happens to the popular AI chatbots and similar GenAI systems and models. Such general-purpose AI (GPAI) systems, and the GPAI models on which they are based, must meet certain transparency requirements, including:
- compliance with EU copyright law
- publication of detailed summaries of the content used for training
For the more powerful GPAI models that could pose systemic risks, additional requirements apply, including conducting model evaluations, assessing and mitigating systemic risks, and reporting incidents.

In addition, artificial or manipulated images, audio, or video content (“deepfakes”) must be clearly labeled as such.