AI Act: The first artificial intelligence regulation in the EU
The EU Parliament is “ready to negotiate first-ever rules for safe and transparent AI”. Yesterday, the European Parliament adopted its negotiating position on the law on artificial intelligence (AI) with 499 votes in favor, 28 against, and 93 abstentions. This means that talks with the EU member states on the final form of the law can now begin. The regulations aim to encourage the adoption of human-centric and trustworthy AI and protect “health, safety, fundamental rights and democracy from its harmful effects”.
AI Rating by “Level of Risk”
“The rules would ensure that AI developed and used in Europe is fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination, and social and environmental wellbeing,” reads the official statement of the EU Parliament.
The rules follow a risk-based approach, graded by the danger the AI poses (or could pose). Depending on that degree of risk, operators of the respective AI systems are assigned rights as well as obligations. “AI systems with an unacceptable level of risk to people’s safety would therefore be prohibited, such as those used for social scoring (classifying people based on their social behavior or personal characteristics).”
AI applications that invade privacy and discriminate will be banned. This applies to:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
- biometric categorization systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- predictive policing systems (based on profiling, location, or past criminal behavior);
- emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and the right to privacy).
Four risk groups
The AI Act differentiates rules for different risk levels, establishing obligations for providers and users depending on the level of risk posed by the artificial intelligence. While many AI systems pose only minimal risk, they still need to be assessed.
I. Unacceptable risk
Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:
- Cognitive behavioral manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behavior in children
- Social scoring: classifying people based on behavior, socio-economic status, or personal characteristics
- Real-time and remote biometric identification systems, such as facial recognition
II. High-risk AI systems
AI systems that negatively affect safety or fundamental rights will be considered high-risk and will be divided into two categories:
1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices, and lifts.
2) AI systems falling into eight specific areas that will have to be registered in an EU database:
- Biometric identification and categorization of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management, and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum, and border control management
- Assistance in legal interpretation and application of the law.
III. Generative AI
Generative AI, like ChatGPT, would have to comply with transparency requirements:
- Disclosing that the content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
IV. Limited risk
Limited-risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI; after interacting with an application, they can then decide whether they want to continue using it. This category includes AI systems that generate or manipulate image, audio, or video content, for example, deepfakes.
AI Act: Rights & Responsibilities
AI systems that significantly endanger the health, safety, and fundamental rights of people or the environment are to be considered “high-risk applications” in the future. The list includes AI systems that are used to influence voters and election results, as well as recommendation systems on social media platforms with more than 45 million users.
In general, anyone who offers AI-based products and services has obligations to comply with the AI Act. In the future, providers will have to assess and mitigate risks and register their models in the relevant EU database before they are placed on the EU market.
Under the new rules, generative AI systems like ChatGPT must meet “transparency requirements” and disclose that their content is AI-generated. This should help to distinguish deepfake photos from real images. In addition, providers must ensure that no illegal content is generated. And: AI operators must publish summaries of the copyrighted data they used for training purposes. Citizens should have more rights, including the ability to file complaints about AI systems. A European Office for Artificial Intelligence is to be set up to monitor the implementation of the AI regulations.