The EU's AI Act is now in force | TechCrunch
Abstract
The article covers the European Union's new risk-based regulation for applications of artificial intelligence, which came into force on August 1, 2024. The law takes a tiered approach: most AI applications are considered low/no-risk and fall outside the regulation; a subset classified as high-risk must comply with risk and quality management obligations; and a "limited risk" tier covering technologies like chatbots and deepfake tools carries transparency requirements. The regulation also sets rules for developers of general purpose AI (GPAI) models, with the most powerful models expected to undertake risk assessment and mitigation measures. The specific requirements for high-risk AI systems are still being developed by European standards bodies, with a target completion date of April 2025.
Q&A
[01] Overview of the EU's AI Regulation
1. What are the key aspects of the EU's new AI regulation?
- The regulation takes a tiered approach, with most AI applications considered low/no-risk and not subject to the regulation
- A subset of AI applications are classified as high-risk, such as biometrics, facial recognition, and AI used in domains like education and employment, which will require compliance with risk and quality management obligations
- A "limited risk" tier applies to AI technologies like chatbots and deepfake tools, which will have transparency requirements
- The regulation also includes rules for developers of general purpose AI (GPAI) models, with the most powerful models expected to undertake risk assessment and mitigation measures
- Penalties for violations range from up to 7% of global annual turnover for banned AI applications to up to 1.5% for supplying incorrect information to regulators
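
A minimal arithmetic sketch of those penalty caps (the percentage caps are taken from the summary above; the dictionary, function, and turnover figure are hypothetical illustrations, not legal advice):

```python
# Illustrative upper bounds on AI Act fines, expressed as a fraction of
# global annual turnover. Category names and the example figure are
# hypothetical; actual fines are set case by case by regulators.

PENALTY_CAPS = {
    "prohibited_use": 0.07,          # up to 7% for banned AI applications
    "incorrect_information": 0.015,  # up to 1.5% for misleading regulators
}

def max_fine(global_annual_turnover_eur: float, violation: str) -> float:
    """Upper bound on a fine for a given violation category."""
    return global_annual_turnover_eur * PENALTY_CAPS[violation]

# A hypothetical company with EUR 10 billion in global annual turnover:
print(max_fine(10e9, "prohibited_use"))          # 700,000,000.0
print(max_fine(10e9, "incorrect_information"))   # 150,000,000.0
```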
2. When will the different aspects of the regulation come into force?
- The regulation came into force on August 1, 2024
- Most provisions will be fully applicable by mid-2026
- The first deadline, which enforces bans on a small number of prohibited uses of AI in specific contexts, applies six months after entry into force, in February 2025
3. How are high-risk AI systems defined and what are the requirements for them?
- High-risk AI systems are those used in domains like biometrics, facial recognition, medical software, and education/employment
- Developers of these high-risk systems will need to ensure compliance with risk and quality management obligations, including undertaking a pre-market conformity assessment
- High-risk systems used by public sector authorities or their suppliers will also have to be registered in an EU database
- The specific requirements for high-risk AI systems are still being developed by European standards bodies, with a target completion date of April 2025
[02] Regulation of General Purpose AI (GPAI)
1. How does the regulation approach general purpose AI (GPAI) models?
- Most GPAI developers will face light transparency requirements, such as providing a summary of training data and committing to policies to ensure they respect copyright rules
- A subset of the most powerful GPAI models, defined as those trained using a total computing power of more than 10^25 FLOPs, will be expected to undertake risk assessment and mitigation measures (a back-of-the-envelope compute estimate follows this list)
- Rules for GPAI are enforced at the EU level, rather than being devolved to member state-level bodies
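
For a sense of scale on that compute threshold, a back-of-the-envelope sketch (the ~6 × parameters × tokens approximation for dense transformer training compute is a common heuristic, not part of the Act; the model figures below are hypothetical):

```python
# Rough estimate of training compute against the Act's 10^25 FLOPs threshold.
# Uses the common ~6 * parameters * training_tokens approximation for dense
# transformers; the heuristic and example figures are illustrative only.

GPAI_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * parameters * training_tokens

# Hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")                        # 6.30e+24 FLOPs
print(flops > GPAI_SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False: below the threshold
```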
2. What guidance has OpenAI provided on complying with the regulation?
- OpenAI has stated it anticipates working closely with the EU AI Office and other authorities as the new law is implemented
- This includes putting together technical documentation and other guidance for downstream providers and deployers of its GPAI models
- OpenAI has advised organizations to first attempt to classify any AI systems in scope, then identify the GPAI and other AI systems they use, and consider what obligations flow from their use cases (this triage is sketched after the list)
- OpenAI recommends consulting with legal counsel if organizations have questions about their compliance obligations
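
A minimal sketch of how an organization might structure that triage, assuming hypothetical category names and classification logic (this is not OpenAI's tooling, and real scoping decisions require legal review):

```python
# Illustrative triage following the stepwise guidance summarized above.
# The use-case sets and returned obligation summaries are hypothetical
# placeholders, not a legal determination under the Act.

from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    is_general_purpose: bool
    use_case: str  # e.g. "chatbot", "hiring", "internal analytics"

HIGH_RISK_USE_CASES = {"hiring", "education", "biometrics"}   # illustrative
LIMITED_RISK_USE_CASES = {"chatbot", "deepfake_generation"}   # illustrative

def triage(system: AISystem) -> str:
    """Step 1: classify in-scope systems; step 2: flag GPAI; step 3: map
    the use case to a risk tier so obligations can be reviewed."""
    if system.is_general_purpose:
        return "GPAI: transparency duties; check the systemic-risk threshold"
    if system.use_case in HIGH_RISK_USE_CASES:
        return "high-risk: risk/quality management, conformity assessment"
    if system.use_case in LIMITED_RISK_USE_CASES:
        return "limited-risk: transparency requirements"
    return "minimal/no risk: outside the Act's main obligations"

print(triage(AISystem("screening-model", False, "hiring")))
# -> high-risk: risk/quality management, conformity assessment
```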