
OpenAI co-founder John Schulman says he will leave and join rival Anthropic

🌈 Abstract

John Schulman, an OpenAI co-founder, is leaving the company to join rival AI startup Anthropic. The article also covers recent leadership changes at OpenAI and the company's focus on AI safety.

🙋 Q&A

[01] John Schulman's Departure

1. What was John Schulman's role at OpenAI?

  • John Schulman worked to refine models that go into OpenAI's ChatGPT chatbot.
  • He was the co-leader of OpenAI's post-training team that refined AI models for ChatGPT and a programming interface for third-party developers.
  • As head of alignment science, he was set to join a safety and security committee at OpenAI.

2. Why is Schulman leaving OpenAI?

  • Schulman said he is leaving OpenAI to "deepen [his] focus on AI alignment, and to start a new chapter of [his] career where [he] can return to hands-on technical work."
  • He clarified that he is not leaving due to a lack of support for new work on AI alignment at OpenAI, stating that "company leaders have been very committed to investing in this area."

3. Where is Schulman going, and who else has left OpenAI for Anthropic?

  • Schulman is joining Anthropic, an AI startup with funding from Amazon.
  • The leaders of OpenAI's superalignment team, Jan Leike and Ilya Sutskever, both left this year, with Leike joining Anthropic.

[02] Changes in Leadership and Focus on AI Safety at OpenAI

1. What happened with OpenAI's leadership and the superalignment team?

  • OpenAI disbanded its superalignment team, which had focused on trying to ensure that people can control AI systems that exceed human capability.
  • The board pushed out OpenAI co-founder and CEO Sam Altman last November, but he was later reinstated after employees protested the decision.
  • Tasha McCauley and Helen Toner, two other board members, resigned after Altman's initial removal.

2. What is OpenAI's current commitment to AI safety?

  • OpenAI says it remains committed to dedicating 20% of its computing resources to safety initiatives.
  • The company is also working with the US AI Safety Institute, giving it early access to OpenAI's next foundation model in order to collaborate on advancing the science of AI evaluations.