Ways to think about AGI — Benedict Evans
🌈 Abstract
The article discusses the history and current state of artificial general intelligence (AGI) research, including the challenges and uncertainties involved in creating a software system with intelligence equivalent to a human's. It explores the various perspectives and debates around the potential risks and benefits of AGI, as well as the difficulties in predicting its development and impact.
🙋 Q&A
[01] The History and Concept of AGI
1. What is the history of the concept of AGI as described in the article?
- The article traces the concept of AGI back to a 1946 science fiction story by the author's grandfather, which explored the idea of a computer system that could provide helpful answers to any request, leading to societal panic.
- The article notes that the idea of creating software with human-level intelligence, or "artificial general intelligence," has been a topic of discussion and speculation for decades, with waves of excitement and disappointment as various approaches have been explored.
- The article mentions that the recent progress in large language models (LLMs) has sparked a new wave of excitement and debate around the potential for AGI, with some experts believing it may be closer than previously thought, while others remain skeptical.
2. How does the article characterize the current state of understanding around AGI?
- The article states that there is fundamental uncertainty around AGI, as we do not have a coherent theoretical model of what general intelligence really is or how to create it.
- It notes that the term "AGI" itself is described as a "thought experiment" or "placeholder," and that we need to be careful of circular definitions and of presuming what AGI would be like based on our own assumptions.
- The article suggests that the current state of AGI research is more akin to an "empirical stage," where we are building and observing systems without fully understanding why they work.
3. What analogies does the article use to discuss the challenges of AGI?
- The article compares the uncertainty around AGI to the historical attempts to deduce the nature of God through philosophical reasoning, noting that this approach cannot create true knowledge.
- It also draws an analogy to the Apollo space program, where the underlying physics and engineering were well-understood, in contrast to the current state of AGI research, where we lack the equivalent theoretical foundations.
[02] Perspectives on the Risks and Benefits of AGI
1. What are the different perspectives presented in the article on the potential risks of AGI?
- The article discusses the "doomers" who argue that there is a real risk of AGI emerging spontaneously from current research and posing an existential threat to humanity, calling for urgent government action.
- However, the article also notes that this "existential risk" concern is distinct from more immediate concerns about how governments and companies might abuse or misuse AI technology, such as for surveillance or deepfakes.
2. How does the article characterize the potential benefits and risks of AGI in a broader context?
- The article suggests that, like other technologies, AGI could bring both great benefits and significant harms.
- It notes that over the past 200 years, automation has mostly been a "very good thing" for humanity, despite the initial "frictional pain" experienced by some, and argues that we should want more of it.
- However, the article also acknowledges the fundamental uncertainty around AGI and the difficulty in predicting its characteristics and consequences, drawing a contrast to more well-understood risks like meteorite impacts.
3. What is the author's preferred approach to addressing the risks of AGI?
- The article suggests that, given the fundamental uncertainty around AGI and the difficulty of preventing its development, the author prefers to assume it will eventually emerge as "just more software and more automation," carrying the benefits and risks that come with that.
- The article argues that, like other technological advancements, we should expect AGI to produce "more pain and more scandals," but that "life will go on," rather than catastrophic outcomes.