Summarized by Aili
World’s first major AI law enters into force — here's what it means for U.S. tech giants
🌈 Abstract
The article discusses the European Union's landmark artificial intelligence (AI) law, the AI Act, which officially enters into force on Thursday. The AI Act governs how companies develop, use, and apply AI, and its impact will fall most heavily on large U.S. technology companies, which are the primary builders and developers of the most advanced AI systems.
🙋 Q&A
[01] What is the AI Act?
- The AI Act is a piece of EU legislation that sets out a comprehensive and harmonized regulatory framework for AI across the EU.
- It takes a risk-based approach, where different applications of AI are regulated differently depending on the level of risk they pose to society.
- For "high-risk" AI applications, strict obligations are introduced, such as adequate risk assessment and mitigation, high-quality training datasets, logging of activity, and sharing of detailed documentation with authorities.
- The law also imposes a blanket ban on "unacceptable-risk" AI applications, such as social scoring systems, predictive policing, and the use of emotion recognition technology in the workplace or schools.
[02] What does the AI Act mean for U.S. tech firms?
- U.S. tech giants like Microsoft, Google, Amazon, Apple, and Meta will be among the most heavily targeted companies under the new rules, as they are the primary builders and developers of advanced AI systems.
- The AI Act will bring much more scrutiny on tech giants when it comes to their operations in the EU market and their use of EU citizen data.
- Meta has already restricted the availability of its AI model in Europe due to regulatory concerns, although this was not necessarily due to the EU AI Act.
- The article cites industry voices arguing that other governments should look to the EU's AI Act as a blueprint for their own AI policies, since it provides a "risk-based regulatory framework" that encourages innovation while prioritizing the safe development and deployment of the technology.
[03] How is generative AI treated under the AI Act?
- Generative AI is labeled in the EU AI Act as an example of "general-purpose" artificial intelligence, which refers to tools that can accomplish a broad range of tasks on a similar level to humans.
- For these systems, the AI Act imposes strict requirements such as respecting EU copyright law, issuing transparency disclosures on how the models are trained, carrying out routine testing, and maintaining adequate cybersecurity protections.
- The EU sets out some exceptions for open-source generative AI models, but they must meet certain criteria to qualify for the exemption, such as making their parameters publicly available and enabling access, usage, modification, and distribution of the model.
[04] What are the penalties for breaching the AI Act?
- Companies that breach the EU AI Act face fines ranging from 7.5 million euros or 1.5% of global annual revenue up to 35 million euros ($41 million) or 7% of global annual revenue, whichever amount is higher (see the sketch after this list for how the "whichever is higher" cap works out).
- The size of the penalties will depend on the infringement and the size of the company fined.
- Oversight of AI models that fall within the scope of the Act will rest with the European AI Office, a regulatory body established by the European Commission in February 2024.
- However, most of the provisions of the law won't take effect until at least 2026, and generative AI systems that are already commercially available will be granted a 36-month "transition period" to come into compliance.
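
To make the penalty structure concrete, here is a minimal sketch of the "whichever is higher" arithmetic for the top fine tier, using the figures quoted in the summary (35 million euros or 7% of global annual revenue). The function name and the example revenue figure are illustrative, not from the article or the legislation.

```python
def max_fine_eur(global_annual_revenue_eur: float,
                 flat_cap_eur: float = 35_000_000,
                 revenue_share: float = 0.07) -> float:
    """Upper fine limit: the larger of a flat amount or a share of global revenue."""
    return max(flat_cap_eur, revenue_share * global_annual_revenue_eur)

# Example (hypothetical figure): a company with 100 billion euros in global
# annual revenue faces an upper limit of 7 billion euros, since 7% of revenue
# far exceeds the 35 million euro flat cap.
print(max_fine_eur(100_000_000_000))  # 7000000000.0
```

For smaller firms, the flat amount dominates instead, which is why the summary notes that the size of the penalty depends on both the infringement and the size of the company.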