
Feds appoint “AI doomer” to run AI safety at US institute

🌈 Abstract

The article discusses the appointment of Paul Christiano as the head of the US AI Safety Institute, which is part of the National Institute of Standards and Technology (NIST). It covers the controversy surrounding Christiano's "AI doomer" views and the concerns raised by NIST staffers about his association with effective altruism and longtermism.

🙋 Q&A

[01] Christiano's Appointment as Head of AI Safety

1. What are the key points about Christiano's appointment as head of AI safety at NIST?

  • Paul Christiano, a former OpenAI researcher, has been appointed as the head of the US AI Safety Institute, which is part of NIST.
  • Christiano is known for pioneering a foundational AI safety technique called reinforcement learning from human feedback (RLHF) and for predicting a "50 percent chance of AI development ending in 'doom.'"
  • Some fear that by appointing Christiano, who is seen as an "AI doomer," NIST risks legitimizing what critics regard as speculative, non-scientific thinking.

2. What were the concerns raised by NIST staffers about Christiano's appointment?

  • There were rumors that NIST staffers opposed Christiano's hiring, with some allegedly threatening to resign.
  • The staffers reportedly feared that Christiano's association with effective altruism and longtermism could compromise the institute's objectivity and integrity.

3. How does Christiano's background and research experience relate to his role as head of AI safety?

  • Christiano has experience in mitigating AI risks, having left OpenAI to found the Alignment Research Center (ARC), which focuses on aligning future machine learning systems with human interests.
  • ARC's mission includes testing whether AI systems are evolving to manipulate or deceive humans, and conducting research to help AI systems scale "gracefully."
  • Some, like Divyansh Kaushik, believe Christiano is "extremely qualified" to test AI models for chemical, biological, radiological, and nuclear risks.

[02] Concerns about "AI Doomer" Discourse

1. What are the concerns raised about the focus on "AI doomer" discourse?

  • Critics argue that focusing on potentially overblown talk of hypothetical killer AI systems or existential AI risks may keep humanity from addressing AI's current harms, such as environmental, privacy, ethics, and bias issues.
  • Emily Bender, a University of Washington professor, believes that the inclusion of "weird AI doomer discourse" in President Biden's AI executive order has led NIST to worry about "fantasy scenarios" instead of more pressing issues.

2. How does Bender view the underlying problem with the "AI safety narrative"?

  • Bender argues that the "fundamental problem with the AI safety narrative is that it takes people out of the picture."
  • She believes the focus should be on "what people do with technology, not what technology autonomously does."