ChatGPT can talk, but OpenAI employees sure can’t
🌈 Abstract
The article discusses the recent resignations of Ilya Sutskever and Jan Leike, two prominent figures at OpenAI, and the company's use of restrictive non-disclosure agreements (NDAs) for departing employees. It raises questions about OpenAI's commitment to transparency and safety in its pursuit of developing artificial general intelligence (AGI).
🙋 Q&A
[01] Resignations of Sutskever and Leike
1. What were the key reasons behind the resignations of Ilya Sutskever and Jan Leike from OpenAI?
- Sutskever had been involved in the boardroom revolt that led to CEO Sam Altman's temporary firing last year, though he later publicly regretted his actions and backed Altman's return.
- Leike said he resigned because he believed OpenAI's safety culture had increasingly taken a back seat to shipping products.
- There was speculation that the two had been forced out, or were resigning in protest over some secret, dangerous new OpenAI project.
2. How did the resignations of Sutskever and Leike impact OpenAI's safety and alignment teams?
- Sutskever and Leike co-led OpenAI's superalignment team, which was tasked with ensuring the safety and alignment of the company's most powerful AI systems.
- Their departures raised concerns about OpenAI's commitment to safety and transparency, as the company has not announced who, if anyone, will replace them in leading the superalignment team.
[02] OpenAI's Restrictive NDAs
1. What are the key details about OpenAI's off-boarding agreements and non-disclosure provisions?
- OpenAI's off-boarding agreements contain non-disclosure and non-disparagement provisions that forbid former employees from criticizing the company for the rest of their lives.
- If a departing employee declines to sign the document or violates it, they can lose all of their vested equity, which can be worth millions of dollars.
- This is unusual: vested equity is generally treated as already earned, and is not typically at risk if an employee declines to sign or later violates an NDA.
2. How does OpenAI's use of restrictive NDAs contradict the company's stated commitment to transparency and external oversight?
- OpenAI has positioned itself as a responsible actor committed to building AGI in a transparent and accountable manner, with input from the broader world.
- However, the restrictive NDAs prevent former employees, who have the most knowledge about the company's inner workings, from speaking out or providing external oversight.
- This contradiction raises questions about whether OpenAI truly intends to be as transparent and accountable as it claims.