What GPT-4o illustrates about AI Regulation
Abstract
The article discusses the recent "95 Theses on AI" published by Sam Hammond of the Foundation for American Innovation, along with the author's own perspective on approaches to AI regulation.
Q&A
[01] The author's perspectives on Sam Hammond's "95 Theses on AI"
1. What are the key ideas from Sam Hammond's "95 Theses on AI" that the author agrees with?
- The Biden Executive Order's reporting requirements for frontier AI models (10^26 FLOPs or more) are basically fine, since there is no widespread consensus about the future capabilities of such models.
- The diffusion of AI will require broad deregulation across many economic sectors.
- The second- and third-order consequences of AGI, a hypothetical and still nebulous future AI system, could be politically destabilizing.
2. What is the one thesis from Hammond's work that the author believes deserves greater attention? The author agrees with Hammond's thesis that the "dogma that we should only regulate technologies based on 'use' or 'risk' may sound more market-friendly, but often results in a far broader regulatory scope than technology-specific approaches."
[02] The author's perspectives on different approaches to AI regulation
1. What are the three broad ways the author suggests for approaching AI regulation?
- Model-level regulation: Formal oversight and regulatory approval for frontier AI models.
- Use-level regulation: Regulations for each anticipated downstream use of AI.
- Conduct-level regulation: A broadly technology-neutral approach, updating existing laws to address new crimes enabled by AI.
2. Which approach does the author favor, and why? The author favors conduct-level regulation, because it focuses on the desired outcomes and standards for personal and commercial conduct rather than on policing the use of a rapidly changing technology.
3. How does the author illustrate the distinction between use-level and conduct-level regulation using the example of GPT-4o's emotion-inference capability? The author explains that under the European Union's use-based AI Act, GPT-4o's ability to infer a person's emotions through their camera would be illegal in workplaces and educational institutions. The author argues this makes little sense, since it is entirely lawful for a human to infer emotions by reading someone's facial expression. A conduct-based approach would avoid this problem by focusing on the desired standards of conduct rather than policing the use of the technology.