The World’s First AI Regulation Act Is Finally Here
🌈 Abstract
This article summarizes the key takeaways from the EU Artificial Intelligence Act, the world's first comprehensive AI law, which entered into force on August 1, 2024. The Act aims to ensure that AI systems deployed in the EU are safe, transparent, and traceable.
🙋 Q&A
[01] The EU Artificial Intelligence Act
1. What are the four risk groups that AI systems in the EU are divided into?
- Minimal to No Risk: Includes AI-enabled video games, spam filters, and AI used in scientific research, all of which pose little or no risk to human safety.
- Limited Risk: Includes systems that pose risks due to a lack of transparency, such as AI chatbots or AI-generated content. Providers must now disclose that users are interacting with an AI system or that content is AI-generated.
- High Risk: Includes AI systems used in critical domains like education, recruitment, management of vital infrastructure, migration and border control, credit scoring, and administration of justice. These systems are subject to strict obligations before they can be put on the market.
- Unacceptable Risk: Includes systems that clearly threaten people's safety, livelihoods, and rights, such as social scoring systems, real-time remote biometric identification for law enforcement, manipulative systems, and systems that encourage dangerous behaviors. These are completely banned.
2. What are the penalties for non-compliance with the EU Artificial Intelligence Act?
- Highest-tier penalties for deploying AI systems in the unacceptable-risk category include fines of up to €35 million or 7% of global annual turnover, whichever is higher.
- Mid-tier penalties for failing to meet the requirements for high-risk AI systems include fines of up to €15 million or 3% of global annual turnover, whichever is higher.
- Lower-tier penalties for less severe violations (such as supplying incorrect or misleading information to authorities) include fines of up to €7.5 million or 1% of global annual turnover, whichever is higher.
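Across the tiers, the applicable cap is the higher of the fixed amount and the percentage of global annual turnover. A minimal sketch of that arithmetic (the function name and example figures are illustrative, not part of the regulation):

```python
def max_fine(fixed_cap_eur: float, turnover_fraction: float,
             annual_turnover_eur: float) -> float:
    """Return the maximum fine for a penalty tier: the fixed cap or the
    given fraction of global annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_fraction * annual_turnover_eur)

# Top tier (€35 million or 7%) for a company with €1 billion turnover:
print(max_fine(35e6, 0.07, 1e9))    # the 7% turnover share dominates
# The same violation by a firm with €100 million turnover:
print(max_fine(35e6, 0.07, 100e6))  # the €35 million fixed cap dominates
```

The turnover-based term matters most for large companies, which is why the fixed caps alone understate the maximum exposure.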
3. How does the act define 'AI Systems'? The act defines 'AI System' as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This definition focuses on functionality and autonomy rather than the complexity of the specific technology, allowing the regulation to remain relevant as AI technology evolves.
4. What entities does the EU Artificial Intelligence Act apply to? The act applies to the following entities, regardless of their location, if they operate in the EU market:
- Providers: Developers/creators of AI systems
- Deployers: Entities using AI systems under their authority (personal, non-professional use is excluded)
- Importers: EU-based entities introducing non-EU AI systems
- Distributors: Supply chain entities making AI systems available
5. What areas are excluded from the EU Artificial Intelligence Act? The act does not apply to:
- Free and open-source AI (unless it falls into the prohibited or high-risk categories)
- Systems used for purely research purposes
- Military/Defense systems
- Systems used for national security activities
6. What are the separate strict rules for General-Purpose AI (GPAI) models? GPAI models are defined as AI models that display significant generality and are capable of competently performing a wide range of distinct tasks. Providers of GPAI models must:
- Provide detailed technical documentation of the model
- Help downstream users understand the capabilities and limitations of the model
- Publish a summary of the data/content used to train the model
- Comply with EU copyright law

GPAI models that exceed a certain threshold of computational resources used for training (the regulation presumes systemic risk above 10^25 floating-point operations) are considered to pose systemic risk. Providers of such models must additionally:
- Conduct model evaluations
- Perform adversarial testing
- Track and report serious incidents
- Ensure adequate cybersecurity protections
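The systemic-risk presumption for GPAI models is triggered by cumulative training compute. A minimal sketch of that check, assuming the 10^25 floating-point-operation threshold set in the final regulation (the helper name is hypothetical, and the Commission may adjust the threshold over time):

```python
# Presumption threshold for GPAI systemic risk, in cumulative training
# FLOPs (per the final regulation; subject to future adjustment).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def has_systemic_risk(training_flops: float) -> bool:
    """A GPAI model is presumed to have systemic risk once its cumulative
    training compute exceeds the threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(has_systemic_risk(5e25))  # above the threshold: extra obligations apply
print(has_systemic_risk(1e24))  # below the threshold: baseline GPAI duties only
```

Because the trigger is compute rather than capability, providers can determine their obligations before a model is released.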