AI Visionary Eliezer Yudkowsky on the Singularity, Bayesian Brains and Closet Goblins

🌈 Abstract

The article discusses the views and work of Eliezer Yudkowsky, a decision theorist and co-founder of the Machine Intelligence Research Institute. It covers topics such as Yudkowsky's perspectives on AI, the Singularity, Bayes' Theorem, and his responses to various questions about his background, beliefs, and vision for the future.

🙋 Q&A

[01] Yudkowsky's Background and Beliefs

1. What does Yudkowsky tell people he does when asked at a party? Depending on the venue, Yudkowsky says he is a "decision theorist", a "cofounder of the Machine Intelligence Research Institute", or he talks about his fiction writing if it's not that kind of party.

2. Is Yudkowsky religious in any way? No, Yudkowsky is not religious. He believes that humanity should take an "Oops" attitude towards religion, admitting mistakes and moving on.

3. What is Yudkowsky's view on college? Yudkowsky believes that college has largely become a "positional good", and that past efforts to expand student loans have mainly driven up the cost of college and the burden of graduate debt.

4. Why does Yudkowsky write fiction? Yudkowsky writes fiction to convey experience, as opposed to nonfiction, which conveys knowledge. He believes that if you want someone to feel what it is like to use Bayesian reasoning, you have to write a story in which a character is doing exactly that.
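
For readers unfamiliar with the term, "Bayesian reasoning" here means updating a prior belief in light of evidence via Bayes' Theorem. The snippet below is a minimal illustrative sketch with hypothetical numbers, not anything taken from the article:

```python
# Minimal sketch of a single Bayesian update (all numbers are hypothetical).
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)

prior = 0.01            # P(H): prior probability that the hypothesis is true
p_e_given_h = 0.90      # P(E|H): probability of seeing the evidence if H is true
p_e_given_not_h = 0.05  # P(E|~H): probability of seeing the evidence if H is false

# Total probability of the evidence, P(E), by the law of total probability
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

posterior = p_e_given_h * prior / p_e  # P(H|E)
print(f"posterior = {posterior:.3f}")  # ~0.154: strong evidence, but the low prior still dominates
```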

[02] Yudkowsky's Views on AI and the Singularity

1. How does Yudkowsky's vision of the Singularity differ from Ray Kurzweil's? Key differences include:

  • Yudkowsky doesn't think the arrival of AI can be timed by Moore's Law, because building AI is a software problem rather than a hardware one.
  • He doesn't expect the first strong AIs to be based on algorithms discovered through neuroscience.
  • He doesn't think "human-machine merging" is a likely path to the first superhuman intelligences.
  • He doesn't believe good outcomes happen by default; making the outcome good takes hard, dedicated work.

2. Does Yudkowsky think he has a shot at becoming a superintelligent cyborg? No, Yudkowsky does not want to become a cyborg. He sees tacking extra details like "cyborg" onto a prediction as a conjunction fallacy: each added detail makes the story sound more vivid and plausible while actually making it less probable, unless the detail is backed by independent evidence (see the sketch at the end of this section).

3. Does Yudkowsky think he has a shot at immortality? Yudkowsky believes that literal immortality is very difficult to achieve, as it would require being wrong about the expected fate of the expanding universe and the basic character of physical law. He sees his own desire for longevity as more of an abstract want than something he can truly imagine.
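
To make the conjunction-fallacy point from question 2 concrete, here is a minimal sketch with hypothetical probabilities; only the inequality P(A and B) <= P(A) comes from probability theory, and the numbers are made up:

```python
# Conjunction fallacy sketch: a conjunction can never be more probable than either conjunct.
# All probabilities below are hypothetical, for illustration only.
p_a = 0.20          # P(A): chance of the simpler claim ("a superintelligence is built")
p_b_given_a = 0.10  # P(B|A): chance of the extra detail ("...and I become a cyborg"), given A

p_a_and_b = p_a * p_b_given_a  # P(A and B) = P(A) * P(B|A)

# The more detailed story is strictly less probable, however plausible it sounds.
assert p_a_and_b <= p_a
print(f"P(A) = {p_a:.2f}, P(A and B) = {p_a_and_b:.2f}")  # P(A) = 0.20, P(A and B) = 0.02
```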

[03] Yudkowsky's Perspective on the Singularity Debate

1. Why does Yudkowsky disagree with the view that the Singularity is an "escapist, pseudoscientific" fantasy? Yudkowsky argues that you can't forecast empirical facts about the Singularity by psychoanalyzing people. The key is to look at the underlying computer science and algorithmic landscape to reason about whether there could be an "intelligence explosion" where self-improving AI rapidly gains capability. This is an empirical question, not one that can be settled by observing people's motivations.

2. Does Yudkowsky's wife Brienne believe in the Singularity? Brienne says she doesn't "believe in" the Singularity the way one might "believe in" robotic trucking; it's simply a technological development she expects unless something weird happens, not something requiring faith. She is confident an intelligence explosion will occur, but is less sure about the other specifics often bundled into the "Singularity" concept.

3. Can we create superintelligences without knowing how our brains work? Yudkowsky argues that just as we can build airplanes without fully understanding bird biology, we can create superintelligent AI without a complete understanding of the human brain. However, pushing machine intelligence far enough will require some high-level notions about human cognition.

[04] Yudkowsky's Perspectives on Superintelligence

1. What would superintelligences want, and would they have sexual desire? Yudkowsky argues that the "what would superintelligences want" question is misguided: superintelligences would not be a "weird tribe of people" but minds drawn from the vast space of cognitive possibilities outside the tiny "dot" occupied by human minds. Their wants would depend on how they were deliberately designed, not on some innate superintelligent nature, and giving them sexual desire would require very specific architectural choices.

2. Does Yudkowsky think superintelligences would be nonviolent? No. Yudkowsky argues that violence is not "stupid" from the perspective of an agent's terminal values; a paperclip maximizer, for example, would not see disassembling humans as a mistake. The key is to create superintelligences that are aligned with human values, not to assume they will be nonviolent by default.

3. Will superintelligences solve the "hard problem" of consciousness? Yudkowsky believes that superintelligences will solve the hard problem of consciousness, and that the solution will seem obvious in retrospect.

4. Will superintelligences possess free will? Yudkowsky believes superintelligences will possess free will, but they will not have the "illusion of free will" that humans experience.
